Search Results

Search found 11033 results on 442 pages for 'video chat'.


  • Problems debugging virtual host with Netbeans

    - by WebDevHobo
    I'm running Ubuntu 9.10. I installed apache2, downloaded the PHP 5.3.2 tar.gz, untarred it and compiled it. I then installed Xdebug via PECL. After that I installed mysql-server and phpMyAdmin. phpMyAdmin can change apache2 settings, and now Xdebug is no longer included in my phpinfo() overview. I've searched for all possible php.ini files, and the two I found both reference the xdebug.so file correctly, yet Xdebug still won't show up. I had used and tested /etc/php5/php.ini and that worked, but it seems Apache is now using a different php.ini file, and I don't know how to change that setting. Can anyone tell me how to bring it back to what it used to be? Also, I installed Apache, PHP and Xdebug by following this video: http://vimeo.com/8005503
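
    For reference, a quick way to confirm which php.ini the Apache SAPI actually loads (it is frequently different from the one the command-line "php --ini" reports) is to serve a phpinfo() page from the virtual host and read its "Loaded Configuration File" and "Scan this dir for additional .ini files" rows; the zend_extension line for xdebug.so needs to be in that file. A minimal sketch:

        <?php
        // info.php - place in the virtual host's document root and open it in a
        // browser. The "Loaded Configuration File" row shows the php.ini the
        // Apache module is really using; the CLI may load a different one.
        phpinfo();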

    Read the article

  • How do you draw like a Crayon?

    - by Simucal
    Crayon Physics Deluxe is a commercial game that came out recently. Watch the video on the main link to get an idea of what I'm talking about. It allows you to draw shapes and have them react with proper physics. The goal is to move a ball to a star across the screen using contraptions and shapes you build. While the game is basically a wrapper for the popular Box2D Physics Engine, it does have one feature that I'm curious about: how its drawing is implemented. It looks very much like a crayon. You can see the texture of the crayon, and as it draws, the line varies in thickness and darkness just like an actual crayon drawing would. The background texture is freely available here. Close up of crayon drawing - note the varying darkness. What kind of algorithm would be used to render those lines in a way that looks like a crayon? Is it a simple texture applied with a random thickness and darkness, or is there something more going on?
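
    For reference, a plausible approach (an assumption about the technique, not a confirmed description of how Crayon Physics does it) is to stamp a small grainy brush texture at short intervals along the stroke and jitter each stamp's position, size and opacity, so the line wobbles in thickness and darkness and overlapping stamps build up darker patches the way wax does on paper. A minimal sketch:

        import math
        import random

        def crayon_stamps(x0, y0, x1, y1, spacing=2.0):
            """Return (x, y, scale, alpha) brush-stamp parameters along a segment."""
            length = math.hypot(x1 - x0, y1 - y0)
            steps = max(1, int(length / spacing))
            stamps = []
            for i in range(steps + 1):
                t = i / float(steps)
                # jitter the position slightly off the ideal line
                x = x0 + (x1 - x0) * t + random.uniform(-0.7, 0.7)
                y = y0 + (y1 - y0) * t + random.uniform(-0.7, 0.7)
                scale = random.uniform(0.8, 1.2)   # varying thickness
                alpha = random.uniform(0.5, 0.9)   # varying darkness
                stamps.append((x, y, scale, alpha))
            return stamps

        # Each stamp would then be rendered by alpha-blending the crayon brush
        # texture at (x, y) with the given scale and opacity.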

    Read the article

  • h264 RTP timestamp

    - by user269090
    Hi guys, I am confused about the timestamp of an H.264 RTP packet. I know the RTP clock rate for video is 90 kHz, which I declared in the SIP SDP. The frame rate of my encoder is not exactly 30 FPS; it is variable, varying from 15 FPS to 30 FPS on the fly, so I cannot use a fixed timestamp increment. Could anyone tell me the timestamp of the following encoded packets? After 0 milliseconds, encoded RTP timestamp = 0 (assume the starting timestamp is 0). After 50 milliseconds, encoded RTP timestamp = ? After 40 milliseconds, encoded RTP timestamp = ? After 33 milliseconds, encoded RTP timestamp = ? What is the formula when the encoded frame rate is variable? Thank you in advance.
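
    For reference, the RTP timestamp does not depend on the nominal frame rate at all; it is the frame's capture time expressed in ticks of the 90 kHz clock declared in the SDP, so a variable frame rate changes nothing. The increment between two packets is simply elapsed_seconds x 90000, i.e. elapsed_ms x 90. Reading the question's numbers as successive inter-frame gaps, a small worked sketch:

        CLOCK_RATE = 90000  # Hz, the video clock rate declared in the SDP

        def next_timestamp(prev_timestamp, gap_ms):
            # increment = elapsed time in seconds * clock rate
            return prev_timestamp + gap_ms * CLOCK_RATE // 1000

        ts = 0
        for gap_ms in (50, 40, 33):
            ts = next_timestamp(ts, gap_ms)
            print(gap_ms, ts)   # 50 -> 4500, then 40 -> 8100, then 33 -> 11070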

    Read the article

  • problem with hand tracking, opencv

    - by JP Talusan
    I am currently creating an OpenCV program that identifies a hand in an image and then gets the contour of the hand only, in order to get the center (x, y) of the hand in pixels. The problem is that whenever the image or video includes an arm or a face, we can't split the hand off from the contours of the arm or the face. We are currently using an HSV flesh-colored histogram to get the contours of the hand. Is there a way to separate them? I just need the hand. Also, if the picture includes only a hand and some part of the arm, how can we isolate the palm itself from the rest of the picture? All we need is a clear center of the palm. Thanks in advance.
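
    For reference, one common trick (a sketch of a general approach, not the asker's code) is to run a distance transform on the binary skin mask: the foreground pixel farthest from any background pixel usually lies inside the palm rather than in the thinner wrist or fingers, so its location gives a palm-center estimate and its value a rough palm radius. A minimal sketch, assuming the mask from the HSV backprojection is already available:

        import cv2

        def palm_center(skin_mask):
            """skin_mask: single-channel 8-bit image, hand/arm pixels = 255."""
            # Distance of every foreground pixel to the nearest background pixel.
            # (On older OpenCV builds the constant is cv2.cv.CV_DIST_L2.)
            dist = cv2.distanceTransform(skin_mask, cv2.DIST_L2, 5)
            # The global maximum is the most "interior" point of the blob.
            _, max_val, _, max_loc = cv2.minMaxLoc(dist)
            return max_loc, max_val   # (x, y) in pixels, and an approximate palm radius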

    Read the article

  • Progressbar for mediaplayer using jquery

    - by Geetha
    In my ASP.NET application I am using the media player to play audio and video. I am controlling the volume using JavaScript code. I want to display a user-defined progress bar. How do I create and control it? Code:

        <object id="mediaPlayer" classid="clsid:22D6F312-B0F6-11D0-94AB-0080C74C7E95"
                codebase="http://activex.microsoft.com/activex/controls/mplayer/en/nsmp2inf.cab#Version=5,1,52,701"
                height="1" standby="Loading Microsoft Windows Media Player components..."
                type="application/x-oleobject" width="1">
          <param name="fileName" value="" />
          <param name="animationatStart" value="true" />
          <param name="transparentatStart" value="true" />
          <param name="autoStart" value="true" />
          <param name="showControls" value="true" />
          <param name="volume" value="100" />
          <param name="loop" value="true" />
        </object>
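
    For reference, a minimal sketch of driving a custom progress bar from this control: poll the player on a timer and set the width of a div. The CurrentPosition/Duration property names below are an assumption based on the legacy 6.4-era ActiveX control used here; on the newer Windows Media Player control the equivalents are player.controls.currentPosition and player.currentMedia.duration, so verify which applies.

        // myProgressBar is a hypothetical <div> sitting inside a fixed-width track <div>
        var player = document.getElementById("mediaPlayer");
        var bar = document.getElementById("myProgressBar");

        window.setInterval(function () {
            if (player && player.Duration > 0) {
                var percent = (player.CurrentPosition / player.Duration) * 100;
                bar.style.width = percent + "%";
            }
        }, 500);   // update twice a second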

    Read the article

  • Javascript: Mediaplayer and its Progress Bar

    - by Geetha
    Hi all, in my ASP.NET application I am using the media player to play audio and video. I am controlling the volume using JavaScript code. I want to display a user-defined progress bar. How do I create it? Code:

        <object id="mediaPlayer" classid="clsid:22D6F312-B0F6-11D0-94AB-0080C74C7E95"
                codebase="http://activex.microsoft.com/activex/controls/mplayer/en/nsmp2inf.cab#Version=5,1,52,701"
                height="1" standby="Loading Microsoft Windows Media Player components..."
                type="application/x-oleobject" width="1">
          <param name="fileName" value="" />
          <param name="animationatStart" value="true" />
          <param name="transparentatStart" value="true" />
          <param name="autoStart" value="true" />
          <param name="showControls" value="true" />
          <param name="volume" value="100" />
          <param name="loop" value="true" />
        </object>

    Read the article

  • Rendering formatted text in a direct3d application

    - by Fire Lancer
    I need to render some formatted text (colours, different font sizes, underlines, bold, etc.), but I'm not sure how to go about doing it. D3DXFont only allows text of a single font/size/weight/colour to be rendered at once, and I can't see a practical way to "combine" multiple calls to ID3DXFont::DrawText to do such things. I looked around and there don't seem to be any existing libraries that do this, and I have no idea how to implement such a text renderer; I couldn't even find any documentation on how one would work, only on rendering simple fixed-width ASCII bitmap fonts, which is probably an entirely different approach, suitable only for simple blocks of text where Unicode is not important. If there are no Direct3D font renderers capable of doing this, are there any other renderers (e.g. ones used to render rich text in a normal window), and would rendering with those to a texture in RAM, then uploading that to the video card to render onto the back buffer, yield reasonable performance?
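
    For reference, a minimal sketch of one way to combine multiple ID3DXFont draws (an illustrative approach, not a full rich-text engine): create one ID3DXFont per style with D3DXCreateFont, split the text into styled "runs", measure each run with DT_CALCRECT, then draw it at a running pen position; underlines can be drawn separately as thin quads. The TextRun structure and DrawRuns helper below are made up for illustration.

        #include <d3dx9.h>
        #include <string>
        #include <vector>

        struct TextRun {
            ID3DXFont*   font;    // font matching this run's size/weight
            D3DCOLOR     color;   // per-run colour, e.g. from D3DCOLOR_ARGB
            std::wstring text;
        };

        void DrawRuns(const std::vector<TextRun>& runs, LONG x, LONG y)
        {
            for (size_t i = 0; i < runs.size(); ++i)
            {
                RECT r = { x, y, x, y };
                // Measure only: DT_CALCRECT fills 'r' without drawing anything.
                runs[i].font->DrawTextW(NULL, runs[i].text.c_str(), -1, &r,
                                        DT_CALCRECT | DT_SINGLELINE | DT_NOCLIP, 0);
                // Draw the run into the measured rectangle.
                runs[i].font->DrawTextW(NULL, runs[i].text.c_str(), -1, &r,
                                        DT_SINGLELINE | DT_NOCLIP, runs[i].color);
                x = r.right;   // advance the pen for the next run
            }
        }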

    Read the article

  • MVC with cocoa/objective-c

    - by Leonardo
    Hi all, I have a strong J2EE background and I am trying to move to Objective-C for some desktop/iPhone programming. I have used many Java web frameworks with MVC in mind, Spring and Struts etc., so I am used to having a servlet or controller that passes attributes to JSP pages, which are the view. In JSP pages with JSTL you can read those attributes and render them on screen. That way the controller and the view are (in theory) clearly separated. In Xcode I can easily recognize the controller and the view built with Interface Builder, but all the tutorials I have found show the controller reaching in and changing labels or text fields directly. So my two questions: it seems to me that there is no separation between the two (controller and view); where am I wrong about that? And is there a way for the controller to pack all the objects into a kind of context, in the J2EE way, and have the view read that context? Thanks, Leonardo

    Read the article

  • Expose url to webservice

    - by Patrick Peters
    In our project we want to query a document management system for a specific document or movie. The DMS returns a URL with the document location (for example: http://mydomain.myserver1.share/mypdf.pdf or http://mydomain.myserver2.share/mymovie.avi). We want to expose the document to internet users and intranet users. The requested file can be large (large video files). Our architecture is: the request goes webapp1 - webapp2 - webapp3 - DMS, and the response goes DMS - webapp3 - webapp2 - webapp1. webapp1 could be on the internet. I have been thinking about how we can obfuscate the real URL from the DMS, for security reasons. I have seen implementations in other web apps where the PDF URL was obfuscated by creating a temporary file for the requested document, specific to the session and user, so other users cannot easily guess the document names of other users. My question: is there a pattern that deals with exposing company/user-sensitive data to the public? Our development is in C# 3.5.
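
    For reference, a minimal sketch (C# 3.5, ASP.NET) of the per-session token idea: webapp1 never reveals the DMS URL, it hands the browser an opaque GUID and streams the bytes itself. The names here (TokenMap, doc.ashx, DocumentProxy) are made up for illustration, and real code would need error handling and access checks.

        using System;
        using System.Collections.Generic;
        using System.Net;
        using System.Web;
        using System.Web.SessionState;

        public class DocumentProxy : IHttpHandler, IRequiresSessionState
        {
            public bool IsReusable { get { return false; } }

            // Elsewhere, after webapp3 returns the real URL:
            //   var token = Guid.NewGuid();
            //   ((Dictionary<Guid, string>)Session["TokenMap"])[token] = realDmsUrl;
            //   link.NavigateUrl = "doc.ashx?t=" + token;

            public void ProcessRequest(HttpContext context)
            {
                var map = context.Session["TokenMap"] as Dictionary<Guid, string>;
                string realUrl;
                if (map == null ||
                    !map.TryGetValue(new Guid(context.Request["t"]), out realUrl))
                {
                    context.Response.StatusCode = 404;
                    return;
                }

                // Stream the DMS response through in chunks so large movies are
                // never buffered whole in memory.
                var request = (HttpWebRequest)WebRequest.Create(realUrl);
                using (var response = request.GetResponse())
                using (var source = response.GetResponseStream())
                {
                    context.Response.ContentType = response.ContentType;
                    var buffer = new byte[64 * 1024];
                    int read;
                    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                        context.Response.OutputStream.Write(buffer, 0, read);
                }
            }
        }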

    Read the article

  • glTexImage2D behavior on iPhone and other OpenGL ES platforms

    - by spurserh
    Hello, I am doing some work which involves drawing video frames in real time in OpenGL ES. Right now I am using glTexImage2D to transfer the data, in the absence of Pixel Buffer Objects and the like. I suspect that the use of glTexImage2D with one or two frames of look-ahead, that is, using several textures so that the glTexImage2D call can be initiated a frame or two ahead, will allow for sufficient parallelism to play in real time if the system is capable of it at all. Is my assumption true that the driver will handle the actual data transfer to the hardware asynchronously after glTexImage2D returns, assuming I don't try to use the texture or call glFinish/glFlush? Is there a better way to do this with OpenGL ES? Thank you very much, Sean
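
    For reference, a minimal sketch of the look-ahead idea in C (an illustration only, not a statement about what any particular driver does): keep a small ring of texture objects, upload the next frame into a texture that is not being drawn this frame, and draw the one filled on a previous call. Whether the copy actually overlaps with rendering is implementation-specific; allocating the storage once and using glTexSubImage2D for the per-frame upload at least avoids reallocating the texture every frame.

        #include <OpenGLES/ES1/gl.h>   /* iPhone OpenGL ES 1.x header; adjust per platform */

        #define NUM_TEX 3
        static GLuint tex[NUM_TEX];
        static int upload_slot = 0;

        void init_textures(int width, int height)
        {
            glGenTextures(NUM_TEX, tex);
            for (int i = 0; i < NUM_TEX; ++i) {
                glBindTexture(GL_TEXTURE_2D, tex[i]);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
                /* Allocate storage once; per-frame uploads use glTexSubImage2D. */
                glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
            }
        }

        /* Called once per video frame; returns the texture to draw this frame
         * (the one filled on an earlier call), while the new pixels go into a
         * different texture object. */
        GLuint submit_frame(const void *pixels, int width, int height)
        {
            int draw_slot = (upload_slot + NUM_TEX - 1) % NUM_TEX;
            glBindTexture(GL_TEXTURE_2D, tex[upload_slot]);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                            GL_RGBA, GL_UNSIGNED_BYTE, pixels);
            upload_slot = (upload_slot + 1) % NUM_TEX;
            return tex[draw_slot];
        }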

    Read the article

  • Is there a DRM scheme that works?

    - by Simon
    We help our clients to manage and publish their media online - images, video, audio, whatever. They always ask my boss whether they can stop users from copying their media, and he asks me, and I always tell him the same thing: no. If the users can view the media, then a sufficiently determined user will always be able to make a copy. But am I right? I've been asked again today, and I promised my boss I'd ask about it online. So - is there a DRM scheme that will work? One that will stop users making copies without stopping legitimate viewing of the media? And if there isn't, how do I convince my boss?

    Read the article

  • New preg_replace for YouTube

    - by marc
    Hello, I noticed that YouTube has made some changes to their page code. Does anyone have an idea how to make this work again? Here is my script (it doesn't work anymore):

        preg_match('/"video_id": "(.*?)"/', $page, $match);
        $var_id = $match[1];
        preg_match('/"t": "(.*?)"/', $page, $match);
        $var_t = $match[1];

    Look at the source of an example YouTube video page: http://www.youtube.com/watch?v=w_J27GxPNM0 (yes, I like that song very much). Now the t variable can be found under

        <script> (function() { var isIE = /*@cc_on!@*/false;

    I won't paste the full source because it's very long. Regards

    Read the article

  • Anyone like me who still believes in ASP.NET Web Forms? Or has everyone switched to MVC?

    - by The_AlienCoder
    After years of mastering ASP.NET Web Forms I recently decided to try out ASP.NET MVC. Naturally my first action was to google 'ASP.NET Web Forms vs ASP.NET MVC'. I hoped to get an honest comparison of the two development techniques and guidelines on when to use which one... but I was completely turned off by the MVC proponents. In almost every post on the net comparing the two 'platforms', the MVC camp is simply bashing Web Forms developers like me. They go on and on about how wrong and stupid using Web Forms is, as if what we have been doing for the past decade has been pointless - all those websites built (and still running), all those clever controls, the mighty GridView... ALL POINTLESS! Karl Seguin especially, with his stupid Web Forms rant, really turned me off. If his intention was to convert people like me, he did the opposite and made me defensive. If anything, I am now convinced that the Web Forms approach is better. Consider the following: All the critical shortcomings of ASP.NET Web Forms have now been addressed in Visual Studio 2010 with ASP.NET 4.0 - cleaner HTML, cleaner control IDs, friendly URLs, leaner ViewState, etc. Why would anyone want to implement the mighty GridView and other wonderful controls from scratch? In MVC you have to do this yourself, because ABSTRACTION IS PLAIN STUPID - instead of writing loops, why not just code using 1s and 0s then? A stateless web is a WEAKNESS, so why would anyone want to get rid of ViewState? Everyone would like a better implementation, but getting rid of it is a step backwards, not forwards. Unit testing is great, but not a critical requirement for most web projects. I thought inline code was dead with ASP, but now it is back and fashionable - thanks to MVC. I don't know about you people, but code-behind was REVOLUTIONARY. I've had the AJAX UpdatePanel do so many wonderful things without writing a line of code, so why demonise it? I've successfully implemented a chat client, an IM client and a status bar using nothing but the UpdatePanel and ScriptManager. Of course you can't use it for everything, but most of the time it's appropriate. And finally, the last word from jQuery - 'Write less, do more!' - that's what Web Forms is all about! So am I the only one who still believes in Web Forms (and in it getting better, as ASP.NET 4.0 has shown), or will it be dead and gone a few years from now, like ASP? I mean, if inline coding is 'the future', why not just switch to PHP!

    Read the article

  • Bluetooth connect to a RS232 adapter in android

    - by ThePosey
    Hello All, I am trying to use the Bluetooth Chat sample API app that google provides to connect to a bluetooth RS232 adapter hooked up to another device. Here is the app for reference: http://developer.android.com/resources/samples/BluetoothChat/index.html And here is the spec sheet for the RS232 connector just for reference: http://serialio.com/download/Docs/BlueSnap-guide-4.77%5FCommands.pdf Well the problem is that when I go to connect to the device with: mmSocket.connect(); (BluetoothSocket::connect()) I always get an IOException error thrown by the connect method. When I do a toString on the exception I get "Service discovery failed". My question is mostly what are the cases that would cause an IOException to get thrown in the connect method? I know those are in the source somewhere but I don't know exactly how the java layer that you write apps in and the C/C++ layer that contains the actual stacks interface. I know that it uses the bluez bluetooth stack which is written in C/C++ but not sure how that ties into the java layer which is what I would think is throwing the exception. Any help on pointing me to where I can try to dissect this issue would be incredible. Also just to note I am able to pair with the RS232 adapter just fine but I am never able to actually connect. Here is the logcat output for more reference: I/ActivityManager( 1018): Displayed activity com.example.android.BluetoothChat/.DeviceListActivity: 326 ms (total 326 ms) E/BluetoothService.cpp( 1018): stopDiscoveryNative: D-Bus error in StopDiscovery: org.bluez.Error.Failed (Invalid discovery session) D/BluetoothChat( 1729): onActivityResult -1 D/BluetoothChatService( 1729): connect to: 00:06:66:03:0C:51 D/BluetoothChatService( 1729): setState() STATE_LISTEN - STATE_CONNECTING E/BluetoothChat( 1729): + ON RESUME + I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_CONNECTING I/BluetoothChatService( 1729): BEGIN mConnectThread E/BluetoothService.cpp( 1018): stopDiscoveryNative: D-Bus error in StopDiscovery: org.bluez.Error.Failed (Invalid discovery session) E/BluetoothEventLoop.cpp( 1018): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1498/hci0/dev_00_06_66_03_0C_51 I/BluetoothChatService( 1729): CONNECTION FAIL TOSTRING: java.io.IOException: Service discovery failed D/BluetoothChatService( 1729): setState() STATE_CONNECTING - STATE_LISTEN D/BluetoothChatService( 1729): start D/BluetoothChatService( 1729): setState() STATE_LISTEN - STATE_LISTEN I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_LISTEN V/BluetoothEventRedirector( 1080): Received android.bleutooth.device.action.UUID I/NotificationService( 1018): enqueueToast pkg=com.example.android.BluetoothChat callback=android.app.ITransientNotification$Stub$Proxy@446327c8 duration=0 I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_LISTEN E/BluetoothEventLoop.cpp( 1018): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1498/hci0/dev_00_06_66_03_0C_51 V/BluetoothEventRedirector( 1080): Received android.bleutooth.device.action.UUID The device I'm trying to connect to is the 00:06:66:03:0C:51 which I can scan for and apparently pair with just fine.
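
    For reference, a common cause of "Service discovery failed" in exactly this setup is that the BluetoothChat sample connects with its own application-specific UUID, which an RS232 bridge does not advertise; a serial adapter exposes the standard Serial Port Profile service instead. Below is a minimal sketch of the usual workaround (use the well-known SPP UUID and cancel discovery before connecting, since an active scan reliably breaks connection attempts); whether this fixes this particular adapter is an assumption to verify.

        import java.io.IOException;
        import java.util.UUID;

        import android.bluetooth.BluetoothAdapter;
        import android.bluetooth.BluetoothDevice;
        import android.bluetooth.BluetoothSocket;

        public class SppConnector {
            // The standard SPP service UUID - not the custom UUID from the chat sample.
            private static final UUID SPP_UUID =
                    UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

            public BluetoothSocket connect(BluetoothDevice device) throws IOException {
                BluetoothSocket socket = device.createRfcommSocketToServiceRecord(SPP_UUID);
                // Discovery is heavyweight; always stop it before connect().
                BluetoothAdapter.getDefaultAdapter().cancelDiscovery();
                socket.connect();   // still blocking, still throws IOException on failure
                return socket;
            }
        }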

    Read the article

  • Is there a way to test HTTP Live Streaming via an iSight camera?

    - by bpapa
    I'm working on an iPhone app that will use HTTP Live Streaming. Using Apple's provided tools (particularly mediafilesegmenter), I'm able to successfully segment and serve an archived video. Now I want to test Live Streaming stuff. I don't own any sort of camcorder, I just have my iSight built-in to my Mac. Is there a way to leverage this camera to test out Live Streaming? Run iSight from the command line maybe? If so, I need a port number for mediastreamsegmenter.

    Read the article

  • Properly using subprocess.PIPE in python?

    - by Gordon Fontenot
    I'm trying to use subprocess.Popen to construct a sequence to grab the duration of a video file. I've been searching for 3 days, and can't find any reason online as to why this code isn't working, but it keeps giving me a blank result:

        import sys
        import os
        import subprocess

        def main():
            the_file = "/Volumes/Footage/Acura/MDX/2001/Crash Test/01 Acura MDX Front Crash.mov"
            ffmpeg = subprocess.Popen(['/opt/local/bin/ffmpeg', '-i', the_file],
                stdout = subprocess.PIPE,
                )
            grep = subprocess.Popen(['grep', 'Duration'],
                stdin = subprocess.PIPE,
                stdout = subprocess.PIPE,
                )
            cut = subprocess.Popen(['cut', '-d', ' ', '-f', '4'],
                stdin = subprocess.PIPE,
                stdout = subprocess.PIPE,
                )
            sed = subprocess.Popen(['sed', 's/,//'],
                stdin = subprocess.PIPE,
                stdout = subprocess.PIPE,
                )
            duration = sed.communicate()
            print duration

        if __name__ == '__main__':
            main()
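
    For reference, a minimal sketch of one way to make this produce output. Two things matter: ffmpeg prints its "Duration: ..." banner to stderr rather than stdout, and each Popen's stdin has to be wired to the previous process's stdout - creating four unconnected PIPEs leaves grep/cut/sed reading from nothing. Doing the text extraction in Python avoids the extra processes entirely:

        import re
        import subprocess

        def get_duration(path, ffmpeg='/opt/local/bin/ffmpeg'):
            proc = subprocess.Popen([ffmpeg, '-i', path],
                                    stdout=subprocess.PIPE,
                                    stderr=subprocess.PIPE)
            out, err = proc.communicate()       # the stream info arrives on stderr
            # On Python 3, decode first: err = err.decode('utf-8', 'replace')
            match = re.search(r'Duration:\s*([\d:.]+)', err)
            return match.group(1) if match else None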

    Read the article

  • How do I reorder vector data using ARM Neon intrinsics?

    - by goldenmean
    This is specifically related to ARM Neon SIMD coding. I am using ARM Neon intrinsics for a certain module in a video decoder. I have vectorized data as follows: there are four 32-bit elements in a Neon register - say, Q0 - which is of size 128 bits. 3B 3A 1B 1A There are another four 32-bit elements in another Neon register, say Q1, which is of size 128 bits. 3D 3C 1D 1C I want the final data to be in the order shown below: 1D 1C 1B 1A 3D 3C 3D 3A Which Neon intrinsics can achieve the desired data order?
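
    For reference, a minimal sketch with 32-bit unsigned lanes, assuming the values above are written high lane to low lane (so Q0 holds {1A, 1B, 3A, 3B} from lane 0 upward) and that the second desired vector is meant to read 3D 3C 3B 3A. The two results are then just the low halves of Q0/Q1 glued together and the high halves glued together, so no per-lane permutation is needed; in assembly the same effect is a single VSWP of the two middle D registers (VSWP d1, d2).

        #include <arm_neon.h>

        void reorder(uint32x4_t q0, uint32x4_t q1,
                     uint32x4_t *out_lo, uint32x4_t *out_hi)
        {
            /* out_lo = 1D 1C 1B 1A : low half of Q0 followed by low half of Q1 */
            *out_lo = vcombine_u32(vget_low_u32(q0), vget_low_u32(q1));
            /* out_hi = 3D 3C 3B 3A : high half of Q0 followed by high half of Q1 */
            *out_hi = vcombine_u32(vget_high_u32(q0), vget_high_u32(q1));
        }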

    Read the article

  • How do you run PartCover with spaces in the path?

    - by nportelli
    I have an MSBuild file that I'm trying to run from Hudson CI. It outputs like this: "C:\Program Files\Gubka Bob\PartCover .NET 2\PartCover.exe" --target "C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe" --target-args "/noisolation" "/testcontainer:C:\CI\Hudson\jobs\Video Raffle\workspace\Source\VideoRaffleCaller\Source\VideoRaffleCaller.Test.Unit\bin\Debug\VideoRaffleCaller.Test.Unit.dll" --include "[VideoRaffleCaller*]*" --output "Coverage\partcover.xml" I get this error: Invalid switch "raffle\workspace\source\videorafflecaller\source\videorafflecaller.test.unit\bin\debug\videorafflecaller.test.unit.dll". For switch syntax, type "MSTest /help" WTF? It looks like PartCover doesn't handle spaces in the --target-args well. Or am I missing some quotes somewhere? Has anyone gotten something like this to work?

    Read the article

  • The requested URL /index.php/blog/scaffolding/add was not found on this server.

    - by Masud
    I am new to CodeIgniter. I am following the video blog tutorials from CodeIgniter, but when I use scaffolding and try to add something, it gives me the error message in the title. Here is my code:

        <?php
        class Blog extends Controller {

            function Blog()
            {
                parent::Controller();
                $this->load->scaffolding('entries');
            }

            function index()
            {
                $data['title'] = "This is my title of the page";
                $data['heading'] = "This is my heading of page";
                $data['todo'] = array("First Name: waliullah",
                                      "Last Name: Masud",
                                      "Full Name: Waliullah Masud");
                $this->load->view('blog_view', $data);
            }
        }
        ?>

    Read the article

  • Adding audio channel using ffmpeg

    - by Raj
    Hi all, I am working with ffmpeg and trying to add an audio stream on the fly. I am using AudioQueues and I get a raw audio buffer. I am encoding the audio as linear PCM, so the audio I get will be in a raw format which I know ffmpeg accepts, but I cannot figure out how. I have looked into AVStream, where we have to create a new stream for this audio channel, but how do I encode it into a video that is already initialized in another AVStream structure? Overall, I would like to get an idea of the architecture of ffmpeg. I have found it difficult to work with since it is poorly documented. Any pointers or details are appreciated. Thanks and regards, Raj Pawan G
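
    For reference, a rough sketch of the usual shape of this with the older (circa 2010) libavformat API: add a second AVStream for PCM audio to the same AVFormatContext that already holds the video stream, then wrap each AudioQueue buffer in an AVPacket tagged with that stream's index and interleave it with the video packets. Exact function and field names vary considerably between ffmpeg versions, so treat the names below as assumptions to check against your headers.

        #include <libavformat/avformat.h>

        AVStream *add_pcm_audio_stream(AVFormatContext *oc, int sample_rate, int channels)
        {
            AVStream *st = av_new_stream(oc, oc->nb_streams);   /* second stream, same context */
            if (!st)
                return NULL;
            AVCodecContext *c = st->codec;
            c->codec_type  = AVMEDIA_TYPE_AUDIO;   /* CODEC_TYPE_AUDIO on very old versions */
            c->codec_id    = CODEC_ID_PCM_S16LE;   /* raw linear PCM */
            c->sample_rate = sample_rate;
            c->channels    = channels;
            return st;
        }

        /* Called for each AudioQueue buffer of raw samples. */
        void write_audio_buffer(AVFormatContext *oc, AVStream *audio_st,
                                uint8_t *samples, int size, int64_t pts)
        {
            AVPacket pkt;
            av_init_packet(&pkt);
            pkt.stream_index = audio_st->index;
            pkt.data = samples;
            pkt.size = size;
            pkt.pts  = pts;   /* expressed in audio_st->time_base units */
            av_interleaved_write_frame(oc, &pkt);
        }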

    Read the article

  • Why is my unsafe code block slower than my safe code?

    - by jomtois
    I am attempting to write some code that will expediently process video frames. I am receiving the frames as a System.Windows.Media.Imaging.WriteableBitmap. For testing purposes, I am just applying a simple threshold filter that will process a BGRA format image and assign each pixel to either be black or white based on the average of the BGR pixels. Here is my "Safe" version: public static void ApplyFilter(WriteableBitmap Bitmap, byte Threshold) { // Let's just make this work for this format if (Bitmap.Format != PixelFormats.Bgr24 && Bitmap.Format != PixelFormats.Bgr32) { return; } // Calculate the number of bytes per pixel (should be 4 for this format). var bytesPerPixel = (Bitmap.Format.BitsPerPixel + 7) / 8; // Stride is bytes per pixel times the number of pixels. // Stride is the byte width of a single rectangle row. var stride = Bitmap.PixelWidth * bytesPerPixel; // Create a byte array for a the entire size of bitmap. var arraySize = stride * Bitmap.PixelHeight; var pixelArray = new byte[arraySize]; // Copy all pixels into the array Bitmap.CopyPixels(pixelArray, stride, 0); // Loop through array and change pixels to black or white based on threshold for (int i = 0; i < pixelArray.Length; i += bytesPerPixel) { // i=B, i+1=G, i+2=R, i+3=A var brightness = (byte)((pixelArray[i] + pixelArray[i + 1] + pixelArray[i + 2]) / 3); var toColor = byte.MinValue; // Black if (brightness >= Threshold) { toColor = byte.MaxValue; // White } pixelArray[i] = toColor; pixelArray[i + 1] = toColor; pixelArray[i + 2] = toColor; } Bitmap.WritePixels(new Int32Rect(0, 0, Bitmap.PixelWidth, Bitmap.PixelHeight), pixelArray, stride, 0); } Here is what I think is a direct translation using an unsafe code block and the WriteableBitmap Back Buffer instead of the forebuffer: public static void ApplyFilterUnsafe(WriteableBitmap Bitmap, byte Threshold) { // Let's just make this work for this format if (Bitmap.Format != PixelFormats.Bgr24 && Bitmap.Format != PixelFormats.Bgr32) { return; } var bytesPerPixel = (Bitmap.Format.BitsPerPixel + 7) / 8; Bitmap.Lock(); unsafe { // Get a pointer to the back buffer. byte* pBackBuffer = (byte*)Bitmap.BackBuffer; for (int i = 0; i < Bitmap.BackBufferStride*Bitmap.PixelHeight; i+= bytesPerPixel) { var pCopy = pBackBuffer; var brightness = (byte)((*pBackBuffer + *pBackBuffer++ + *pBackBuffer++) / 3); pBackBuffer++; var toColor = brightness >= Threshold ? byte.MaxValue : byte.MinValue; *pCopy = toColor; *++pCopy = toColor; *++pCopy = toColor; } } // Bitmap.AddDirtyRect(new Int32Rect(0,0, Bitmap.PixelWidth, Bitmap.PixelHeight)); Bitmap.Unlock(); } This is my first foray into unsafe code blocks and pointers, so maybe the logic is not optimal. I have tested both blocks of code on the same WriteableBitmaps using: var threshold = Convert.ToByte(op.Result); var copy2 = copyFrame.Clone(); Stopwatch stopWatch = new Stopwatch(); stopWatch.Start(); BinaryFilter.ApplyFilterUnsafe(copyFrame, threshold); stopWatch.Stop(); var unsafesecs = stopWatch.ElapsedMilliseconds; stopWatch.Reset(); stopWatch.Start(); BinaryFilter.ApplyFilter(copy2, threshold); stopWatch.Stop(); Debug.WriteLine(string.Format("Unsafe: {1}, Safe: {0}", stopWatch.ElapsedMilliseconds, unsafesecs)); So I am analyzing the same image. 
    A test run of an incoming stream of video frames (times in milliseconds):

        Unsafe: 110, Safe: 53
        Unsafe: 136, Safe: 42
        Unsafe: 106, Safe: 36
        Unsafe: 95,  Safe: 43
        Unsafe: 98,  Safe: 41
        Unsafe: 88,  Safe: 36
        Unsafe: 129, Safe: 65
        Unsafe: 100, Safe: 47
        Unsafe: 112, Safe: 50
        Unsafe: 91,  Safe: 33
        Unsafe: 118, Safe: 42
        Unsafe: 103, Safe: 80
        Unsafe: 104, Safe: 34
        Unsafe: 101, Safe: 36
        Unsafe: 154, Safe: 83
        Unsafe: 134, Safe: 46
        Unsafe: 113, Safe: 76
        Unsafe: 117, Safe: 57
        Unsafe: 90,  Safe: 41
        Unsafe: 156, Safe: 35

    Why is my unsafe version always slower? Is it due to using the back buffer? Or am I doing something wrong? Thanks

    Read the article

  • How do I reorder vector data using ARM Neon intrinsics?

    - by goldenmean
    Hello, this is specifically related to ARM Neon SIMD coding. I am using ARM Neon intrinsics for a certain module in a video decoder. I have vectorized data as follows: there are four 32-bit elements in a Neon register, say Q0, which is of size 128 bits. 3B 3A 1B 1A There are another four 32-bit elements in another Neon register, say Q1, which is of size 128 bits. 3D 3C 1D 1C I want the final data to be in the order shown below: 1D 1C 1B 1A 3D 3C 3D 3A Which Neon intrinsics can achieve the desired data order? Thanks, -AD

    Read the article

  • UIWebView/MPMoviePlayerController and the "Done" button

    - by David Sowsy
    I am using a UIWebView to load both streaming audio and video. I have properly set up the UIWebView delegate and I am receiving the webViewDidStartLoading and webViewFinishedLoading events perfectly. The web view launches a full-screen window (likely an MPMoviePlayerController). Apple's MoviePlayer example gets the array of windows to determine which one is the moviePlayerWindow, for adding custom drawing or getting at the GUI components; I believe this to be a bad practice/hack. My expectation is that I should be able to figure out when that button was clicked by either a delegate method or an NSNotification. It may also be the case that I have to poke around subviews or controllers with isKindOf calls, but I don't think those are correct approaches. Are my expectations incorrect, and if so, why? What is the correct way to bind an action to that "Done" button?

    Read the article

  • Spring MVC Web PetClinic Tutorial?

    - by wuntee
    Is there a tutorial that goes along with the PetClinic application? I have been trying to find one, but Google is not helping me today. Specifically, I don't understand things like: @Autowired - what does that even mean?

        @RequestMapping(method = RequestMethod.GET)
        public String setupForm(@RequestParam("petId") int petId, ModelMap model) {
            Pet pet = this.clinic.loadPet(petId);
            model.addAttribute("pet", pet);
            return "petForm";
        }

    How can a request handler return just a String? Shouldn't it need to return some sort of ModelAndView? Or does the application somehow redirect to whatever is returned? A lot of confusing concepts - if there is a tutorial or video (like Spring Security has), that would be very helpful. Thanks.
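
    For reference on the two specific points: @Autowired tells Spring to inject the annotated dependency (here the Clinic) by type from the application context, so the controller never constructs it itself; and a plain String return value is treated as a logical view name that a configured ViewResolver maps to an actual view (for example, an InternalResourceViewResolver turning "petForm" into a JSP path), with the populated ModelMap carried along implicitly. A sketch of the equivalent explicit form, written as a method that would sit in the same controller:

        @RequestMapping(method = RequestMethod.GET)
        public ModelAndView setupForm(@RequestParam("petId") int petId) {
            Pet pet = this.clinic.loadPet(petId);
            ModelAndView mav = new ModelAndView("petForm");   // the logical view name
            mav.addObject("pet", pet);
            return mav;   // same outcome as returning the String "petForm" with a ModelMap
        }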

    Read the article

  • In what programming language are the things I actually care about written? [closed]

    - by David
    To be more specific and less subjective: in what language are video games like Halo 3, COD 4 and Mario Kart written? Microsoft Word for Windows? For Mac? The animation software used by big movie studios to make movies like Toy Story and Monsters, Inc.? The software that helps pilots control the F-22 Raptor? The software that watches the stock market? The software in the computer in my car? The software that makes the internet work? (This one is a bit vague; if more specificity is needed, then Google specifically.) Robots?

    Read the article
