Search Results

Search found 17407 results on 697 pages for 'static constructor'.

  • Splitting an MP4 file

    - by Asaf Chertkoff
    what is the fastest and less resource consuming method for splitting an MP4 file? @Alex: it didn't work, i don't know why. see the out put here: asafche@asafche-laptop:~$ ffmpeg -vcodec copy -ss 0 -t 00:10:00 -i /home/asafche/Videos/myVideos/MAH00124.MP4 /home/asafche/Videos/myVideos/eh.mp4 FFmpeg version SVN-r0.5.1-4:0.5.1-1ubuntu1.1, Copyright (c) 2000-2009 Fabrice Bellard, et al. configuration: --extra-version=4:0.5.1-1ubuntu1.1 --prefix=/usr --enable-avfilter --enable-avfilter-lavf --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --disable-stripping --disable-vhook --enable-runtime-cpudetect --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static libavutil 49.15. 0 / 49.15. 0 libavcodec 52.20. 1 / 52.20. 1 libavformat 52.31. 0 / 52.31. 0 libavdevice 52. 1. 0 / 52. 1. 0 libavfilter 0. 4. 0 / 0. 4. 0 libswscale 0. 7. 1 / 0. 7. 1 libpostproc 51. 2. 0 / 51. 2. 0 built on Mar 31 2011 18:53:20, gcc: 4.4.3 Seems stream 0 codec frame rate differs from container frame rate: 119.88 (120000/1001) -> 59.94 (60000/1001) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/home/asafche/Videos/myVideos/MAH00124.MP4': Duration: 00:15:35.96, start: 0.000000, bitrate: 5664 kb/s Stream #0.0(und): Video: h264, yuv420p, 1280x720, 59.94 tbr, 59.94 tbn, 119.88 tbc Stream #0.1(und): Audio: aac, 48000 Hz, stereo, s16 Output #0, mp4, to '/home/asafche/Videos/myVideos/eh.mp4': Stream #0.0(und): Video: libx264, yuv420p, 1280x720, q=2-31, 90k tbn, 59.94 tbc Stream #0.1(und): Audio: 0x0000, 48000 Hz, stereo, s16, 64 kb/s Stream mapping: Stream #0.0 -> #0.0 Stream #0.1 -> #0.1 Unsupported codec for output stream #0.1 it says something about different frame rate...

    Read the article

  • Android Touch Event Collision Detection

    - by chrissb
    I'm relatively new to both Java and Android, so hopefully the problem I'm having is stemming from something pretty minor that I've overlooked. I've got a (very early stage) game that I've started working on, for Android using Java. At this stage, when the user touches the screen, if they touched a point at which there is an enemy, the enemies health is decreased and they become immobile (for the current implementation at least). The issue that I'm having is that the touch detection doesn't always seem to work. I've got a testing sprite set up that goes to the eventX and eventY coordinates of the touch down event, and it always seems to collide with the enemy object. Yet, the enemy doesn't always register as being hit, and sometimes a hit is registered when the sprite indicates the touch coordinates were outside of the enemies bounding box. I realise that this probably doesn't mean much without any code, so here's what I've got so far. Be gentle, as this is literally my first attempt at something more than basic movement etc. First off, the MainGamePanel class registers the touch event, and informs the levelmanager class (which is what I set up to monitor/handle enemies) public boolean onTouchEvent(MotionEvent event) { if (event.getAction() == MotionEvent.ACTION_DOWN){ levelManager.handleActionDown((int)event.getX(), (int)event.getY()); targetX=event.getX(); targetY=event.getY(); } if (event.getAction() == MotionEvent.ACTION_MOVE) { //the gestures } if (event.getAction() == MotionEvent.ACTION_UP) { //touch was released } return true; } From there, in the levelmanager class the touch event is passed on to all of the enemies within a list array: public static void handleActionDown(int eventX,int eventY){ hit=false; for (enemy1 en : enemy1array){ en.handleActionDown(eventX, eventY); } } The rest of the collision code is handled within the enemies handleActionDown function: public void handleActionDown(int eventX, int eventY) { if(eventX>this.x-enemy1bitmap.getWidth() && eventX<this.x+enemy1bitmap.getWidth() && eventY>this.y-enemy1bitmap.getHeight() && eventY<this.x+enemy1bitmap.getHeight()){ takeDamage(1); levelmanager.setHit(); } } I should probably be using getWidth()/2 and getHeight()/2 for it to be more accurate, but I expanded the area to test this - although I've noticed no improvement. At this stage, the games detection over whether or not the enemy is hit is spotty at best. Generally it takes two or three attempts before a collision is successfully registered, even though the sprite that is being used for testing and set to the eventX and eventY coordinates always indicates that the collision should have worked. Hopefully someone can steer me in the right direction here, and if more information is needed, ask away! Cheers, -Chris
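
    One thing that stands out in the quoted bounds check is the last comparison: it tests eventY against this.x + enemy1bitmap.getHeight() rather than this.y + enemy1bitmap.getHeight(), which would explain hits registering only intermittently and sometimes outside the visible bounding box. A minimal sketch of the intended point-in-rectangle test is below (written in C++ for illustration; the struct and names are hypothetical, and the logic carries over to the Java above one-to-one). If the drawing surface is scaled or offset, the raw event coordinates also need converting into the same space as the enemy positions before this test.

    ```cpp
    // Axis-aligned point-in-rectangle test around a sprite's centre.
    struct Bounds {
        float x, y;          // centre of the sprite
        float halfW, halfH;  // half extents (width / 2, height / 2)
    };

    // Returns true when the touch point (px, py) lies inside the bounds.
    bool contains(const Bounds& b, float px, float py) {
        return px > b.x - b.halfW && px < b.x + b.halfW &&
               py > b.y - b.halfH && py < b.y + b.halfH;  // y on both sides, not x
    }
    ```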

    Read the article

  • Ubuntu systematically losing wired connection

    - by Lukasz Baczynski
    I'm working on 11.10 for few recent days, everything was working perfect until today. Updated ubuntu (some certs were updates as far as i remember) and from this time, wired network stops working randomly and systematically. (All other pcs/macs work fine) From 192.168.0.9 icmp_seq=25 Destination Host Unreachable From 192.168.0.9 icmp_seq=26 Destination Host Unreachable From 192.168.0.9 icmp_seq=27 Destination Host Unreachable From 192.168.0.9 icmp_seq=28 Destination Host Unreachable From 192.168.0.9 icmp_seq=29 Destination Host Unreachable From 192.168.0.9 icmp_seq=30 Destination Host Unreachable From 192.168.0.9 icmp_seq=31 Destination Host Unreachable 64 bytes from 192.168.0.1: icmp_req=32 ttl=64 time=1003 ms 64 bytes from 192.168.0.1: icmp_req=33 ttl=64 time=0.496 ms 64 bytes from 192.168.0.1: icmp_req=34 ttl=64 time=0.576 ms 64 bytes from 192.168.0.1: icmp_req=35 ttl=64 time=0.522 ms 64 bytes from 192.168.0.1: icmp_req=36 ttl=64 time=0.624 ms 64 bytes from 192.168.0.1: icmp_req=37 ttl=64 time=0.625 ms 64 bytes from 192.168.0.1: icmp_req=38 ttl=64 time=0.555 ms It'll work for 20 seconds then it'll stop working for 10-30sec and so on. I've tried setting my router to give static IPs, it doesn't help. NOTHING has been changed since yesterday beside the package update... Here are other settings that may be useful: baka@baka-PC:~/Private/projects/wduk$ lspci -nnk | grep -iA2 net 06:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 06) Subsystem: ASUSTeK Computer Inc. P8P67 and other motherboards [1043:8432] Kernel driver in use: r8169 baka@baka-PC:~/Private/projects/wduk$ ifconfig -a eth0 Link encap:Ethernet HWaddr **Removed MAC address** inet addr:192.168.0.9 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::ca60:ff:fe0a:85b2/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:6400 errors:0 dropped:6400 overruns:0 frame:6400 TX packets:7085 errors:0 dropped:107 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:4191983 (4.1 MB) TX bytes:886881 (886.8 KB) Interrupt:72 Base address:0x2000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:2522 errors:0 dropped:0 overruns:0 frame:0 TX packets:2522 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:1070130 (1.0 MB) TX bytes:1070130 (1.0 MB) baka@baka-PC:~/Private/projects/wduk$ cat /etc/resolv.conf # Generated by NetworkManager nameserver 8.8.8.8 thanks for help

    Read the article

  • Difference between the terms Material & Effect

    - by codey
    I'm making an effect system right now (I think, because it may be a material system... or both!). The effects system follows the common (e.g. COLLADA, DirectX) effect framework abstraction in which Effects have Techniques, Techniques have Passes, and Passes have States & Shader Programs. An effect, according to COLLADA, defines the equations necessary for the visual appearance of geometry and screen-space image processing. Keeping with the abstraction, effects contain techniques. Each effect can contain one or many techniques (i.e. ways to generate the effect), each of which describes a different method for rendering that effect. The technique could relate to quality (e.g. high precision, high LOD, etc.), or in-game situation (e.g. night/day, power-up-mode, etc.). Techniques hold a description of the textures, samplers, shaders, parameters, & passes necessary for rendering this effect using one method. Some algorithms require several passes to render the effect. Pipeline descriptions are broken into an ordered collection of Pass objects. A pass provides a static declaration of all the render states, shaders, & settings for "one rendering pipeline" (i.e. one pass). Meshes usually contain a series of materials that define the model. According to the COLLADA spec (again), a material instantiates an effect, fills its parameters with values, & selects a technique. But I see material defined differently in other places, such as just the Lambert, Blinn, Phong "material types/shaded surfaces", or as Metal, Plastic, Wood, etc. In game dev forums, people often talk about implementing a "material/effect system". Is the material not an instance of an effect? Ergo, if I had effect objects stored in a collection, & each effect instance object had its own parameter settings, then there would be no need for the concept of a material... Or am I interpreting it wrong? Please help by contributing your interpretations, as I want to be clear on the distinction (if any), & don't want to miss out on the concept of a material if it should be implemented to follow the abstraction of the DirectX FX framework & COLLADA definitions closely.
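
    Reading the COLLADA definition quoted above literally, the split is: an Effect is the shared, parameterised description (techniques, passes, shaders, declared parameters), and a Material is an instance of one effect with concrete parameter values and a chosen technique. Under that reading, "Phong", "Metal" or "Wood" are just materials that bind different values to the same effect. A rough sketch of that data layout, in C++ purely for illustration (names and fields are assumptions, not a definitive design):

    ```cpp
    #include <cstddef>
    #include <map>
    #include <string>
    #include <vector>

    struct Pass {                                   // one static pipeline configuration
        std::string vertexShader;
        std::string fragmentShader;
        std::map<std::string, int> renderStates;    // blend, depth, cull, ...
    };

    struct Technique {                              // one way of producing the effect
        std::string name;                           // e.g. "high_quality", "night"
        std::vector<Pass> passes;
    };

    struct Effect {                                 // shared, parameterised description
        std::vector<Technique> techniques;
        std::vector<std::string> parameterNames;    // e.g. "diffuseMap", "shininess"
    };

    struct Material {                               // an *instance* of an effect
        const Effect* effect;                       // which effect it uses
        std::size_t techniqueIndex;                 // which technique is selected
        std::map<std::string, float> scalarParams;  // concrete parameter values
        std::map<std::string, std::string> textureParams;
    };
    ```

    Keeping a Material type still pays off even if effect instances carry their own parameters: many materials can share one compiled effect, and swapping a material only touches parameter bindings rather than shader and pass state.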

    Read the article

  • More Efficient Data Structure for Large Layered Tile Map

    - by Stupac
    It seems like the popular method is to break the map up into regions and load them as needed, my problem is that in my game there are many AI entities other than the player out performing actions in virtually all the regions of the map. Let's just say I have a 5000x5000 map, when I use a 2D array of byte's to render it my game uses around 17 MB of memory, as soon as I change that data structure to a my own defined MapCell class (which only contains a single field: byte terrain) my game's memory consumption rockets up to 400+ MB. I plan on adding layering, so an array of byte's won't cut it and I figure I'd need to add a List of some sort to the MapCell class to provide objects in the layers. I'm only rendering tiles that are on screen, but I need the rest of the map to be represented in memory since it is constantly used in Update. So my question is, how can I reduce the memory consumption of my map while still maintaining the above requirements? Thank you for your time! Here's a few snippets my C# code in XNA4: public static void LoadMapData() { // Test map generations int xSize = 5000; int ySize = 5000; MapCell[,] map = new MapCell[xSize,ySize]; //byte[,] map = new byte[xSize, ySize]; Terrain[] terrains = new Terrain[4]; terrains[0] = grass; terrains[1] = dirt; terrains[2] = rock; terrains[3] = water; Random random = new Random(); for(int x = 0; x < xSize; x++) { for(int y = 0; y < ySize; y++) { //map[x,y] = new MapCell(terrains[random.Next(4)]); map[x,y] = new MapCell((byte)random.Next(4)); //map[x, y] = (byte)random.Next(4); } } testMap = new TileMap(map, xSize, ySize); // End test map setup currentMap = testMap; } public class MapCell { //public TerrainType terrain; public byte terrain; public MapCell(byte itsTerrain) { terrain = itsTerrain; } // the type of terrain this cell is treated as /*public Terrain terrain { get; set; } public MapCell(Terrain itsTerrain) { terrain = itsTerrain; }*/ }
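
    The jump from ~17 MB to 400+ MB is roughly what one would expect when every cell becomes its own heap object: the array then stores a reference per cell, and each MapCell adds per-object overhead on top of its single byte of payload. The usual fix is to keep the dense per-cell data as plain values (in C#, a struct or the raw byte[,] from the original version) and push the optional layer data into a sparse side structure such as a Dictionary keyed by cell index, so only occupied cells cost anything. A sketch of that layout, written in C++ only to illustrate the idea (the names are made up):

    ```cpp
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct TileMap {
        int width  = 5000;
        int height = 5000;

        // Dense data: one byte per cell, about 24 MB for 5000 x 5000.
        std::vector<std::uint8_t> terrain;

        // Sparse data: only cells that actually hold layered objects pay for them.
        std::unordered_map<std::uint32_t, std::vector<std::uint16_t>> layers;

        TileMap() : terrain(static_cast<std::size_t>(width) * height, 0) {}

        std::uint32_t index(int x, int y) const {
            return static_cast<std::uint32_t>(y) * width + x;
        }

        std::uint8_t terrainAt(int x, int y) const { return terrain[index(x, y)]; }

        void addLayerObject(int x, int y, std::uint16_t objectId) {
            layers[index(x, y)].push_back(objectId);  // allocates only when used
        }
    };
    ```

    The same structure-of-arrays idea extends to extra per-cell attributes: add another parallel array per attribute rather than growing a per-cell class.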

    Read the article

  • Stencil buffer appears to not be decrementing values correctly

    - by Alex Ames
    I'm attempting to use the stencil buffer as a clipper for my UI system, but I'm having trouble debugging a problem I'm running into. This is what I'm doing: A widget can pass a rectangle to the stencil clipper functions, which will increment the stencil buffer values that it covers. Then it will draw its children, which will only get drawn in the stencilled area (so that if they extend outside they'll be clipped). After a widget is done drawing its children, it pops that rectangle from the stack and in the process decrements the values in the stencil buffer that it has previously incremented. The slightly simplified code is below: static void drawStencil(Rect& rect, unsigned int ref) { // Save previous values of the color and depth masks GLboolean colorMask[4]; GLboolean depthMask; glGetBooleanv(GL_COLOR_WRITEMASK, colorMask); glGetBooleanv(GL_DEPTH_WRITEMASK, &depthMask); // Turn off drawing glColorMask(0, 0, 0, 0); glDepthMask(0); // Draw vertices here ... // Turn everything back on glColorMask(colorMask[0], colorMask[1], colorMask[2], colorMask[3]); glDepthMask(depthMask); // Only render pixels in areas where the stencil buffer value == ref glStencilFunc(GL_EQUAL, ref, 0xFF); glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); } void pushScissor(Rect rect) { // increment things only at the current stencil stack level glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF); glStencilOp(GL_KEEP, GL_INCR, GL_INCR); s_scissorStack.push_back(rect); drawStencil(rect, s_scissorStack.size()); } void popScissor() { // undo what was done in the previous push, // decrement things only at the current stencil stack level glStencilFunc(GL_EQUAL, s_scissorStack.size(), 0xFF); glStencilOp(GL_KEEP, GL_DECR, GL_DECR); Rect rect = s_scissorStack.back(); s_scissorStack.pop_back(); drawStencil(rect, s_scissorStack.size()); } And this is how it's being used by the Widgets: if (m_clip) pushScissor(m_rect); drawInternal(target, states); for (auto child : m_children) target.draw(*child, states); if (m_clip) popScissor(); This is the result of the above code: There are two things on the screen, a giant test button, and a window with some buttons and text areas on it. The text area scroll box is set to clip its children (so that the text doesn't extend outside the scroll box). The button is drawn after the window and should be on top of it completely. However, for some reason the text area is appearing on top of the button. The only reason I can think of that this would happen is if the stencil values were not getting decremented in the pop, and when it comes time to render the button, since those pixels don't have the right stencil value it doesn't draw over. But I can't figure out what's wrong with my code that would cause that to happen.
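
    The push/pop arithmetic above is symmetric, so the usual suspects with this scheme are state rather than logic: the stencil buffer not being cleared to zero at the start of each frame, glStencilMask having been left at something other than 0xFF when the rectangles are written, GL_STENCIL_TEST being toggled off by other drawing code (the library behind target.draw may reset GL state between calls), or the pop rectangle being submitted under a different transform than the push. A hedged sketch of the same pattern with those pieces made explicit (drawQuad and Rect are stand-ins, not taken from the original code):

    ```cpp
    #include <GL/gl.h>

    struct Rect { float x, y, w, h; };
    void drawQuad(const Rect& rect);              // hypothetical: submits the rectangle

    void beginFrame() {
        glEnable(GL_STENCIL_TEST);
        glClearStencil(0);
        glStencilMask(0xFF);                      // make sure writes aren't masked off
        glClear(GL_STENCIL_BUFFER_BIT);           // start every frame from zero
    }

    // depth = stack size before the push; the same rect (same transform) must be
    // passed to the matching pop.
    void pushClip(const Rect& rect, unsigned depth) {
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_FALSE);
        glStencilFunc(GL_EQUAL, depth, 0xFF);     // only touch the parent's region
        glStencilOp(GL_KEEP, GL_INCR, GL_INCR);
        drawQuad(rect);
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);
        glStencilFunc(GL_EQUAL, depth + 1, 0xFF); // children draw only inside it
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    }

    void popClip(const Rect& rect, unsigned depth) {
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_FALSE);
        glStencilFunc(GL_EQUAL, depth + 1, 0xFF);
        glStencilOp(GL_KEEP, GL_DECR, GL_DECR);
        drawQuad(rect);                           // identical geometry to the push
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_TRUE);
        glStencilFunc(GL_EQUAL, depth, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    }
    ```

    Temporarily rendering the stencil rectangles with colour writes enabled is also a quick way to confirm that the push and the pop really cover the same pixels.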

    Read the article

  • What causes Box2D revolute joints to separate?

    - by nbolton
    I have created a rag doll using dynamic bodies (rectangles) and simple revolute joints (with lower and upper angles). When my rag doll hits the ground (which is a static body) the bodies seem to fidget and the joints separate. It looks like the bodies are sticking to the ground, and the momentum of the rag doll pulls the joint apart (see screenshot below). I'm not sure if it's related, but I'm using the Badlogic GDX Java wrapper for Box2D. Here's some snippets of what I think is the most relevant code: private RevoluteJoint joinBodyParts( Body a, Body b, Vector2 anchor, float lowerAngle, float upperAngle) { RevoluteJointDef jointDef = new RevoluteJointDef(); jointDef.initialize(a, b, a.getWorldPoint(anchor)); jointDef.enableLimit = true; jointDef.lowerAngle = lowerAngle; jointDef.upperAngle = upperAngle; return (RevoluteJoint)world.createJoint(jointDef); } private Body createRectangleBodyPart( float x, float y, float width, float height) { PolygonShape shape = new PolygonShape(); shape.setAsBox(width, height); BodyDef bodyDef = new BodyDef(); bodyDef.type = BodyType.DynamicBody; bodyDef.position.y = y; bodyDef.position.x = x; Body body = world.createBody(bodyDef); FixtureDef fixtureDef = new FixtureDef(); fixtureDef.shape = shape; fixtureDef.density = 10; fixtureDef.filter.groupIndex = -1; fixtureDef.filter.categoryBits = FILTER_BOY; fixtureDef.filter.maskBits = FILTER_STUFF | FILTER_WALL; body.createFixture(fixtureDef); shape.dispose(); return body; } I've skipped the method for creating the head, as it's pretty much the same as the rectangle method (just using a cricle shape). Those methods are used like so: torso = createRectangleBodyPart(x, y + 5, 0.25f, 1.5f); Body head = createRoundBodyPart(x, y + 7.4f, 1); Body leftLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1); Body rightLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1); Body leftLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1); Body rightLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1); Body leftArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f); Body rightArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f); joinBodyParts(torso, head, new Vector2(0, 1.6f), headAngle); leftLegTopJoint = joinBodyParts(torso, leftLegTop, new Vector2(0, -1.2f), 0.1f, legAngle); rightLegTopJoint = joinBodyParts(torso, rightLegTop, new Vector2(0, -1.2f), 0.1f, legAngle); leftLegBottomJoint = joinBodyParts(leftLegTop, leftLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0); rightLegBottomJoint = joinBodyParts(rightLegTop, rightLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0); leftArmJoint = joinBodyParts(torso, leftArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle); rightArmJoint = joinBodyParts(torso, rightArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle);
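
    Revolute joints in Box2D are soft constraints solved iteratively, so under a hard landing they can visibly stretch before the solver pulls them back; heavy fixtures (density 10 on comparatively large boxes), a large or variable time step, and low iteration counts all make it worse. The usual knobs are a fixed 1/60 step, more velocity/position iterations than the manual's suggested values, lower densities, and marking fast-moving bodies as bullets. A sketch of those settings using the native C++ Box2D API, which the libGDX wrapper mirrors almost name for name (the numbers are examples, not tuned values):

    ```cpp
    #include <Box2D/Box2D.h>

    // Step the world with a fixed time step and a generous solver budget.
    void stepRagdollWorld(b2World& world) {
        const float timeStep = 1.0f / 60.0f;   // never pass a large, variable dt
        const int velocityIterations = 10;     // manual suggests 8; more = stiffer joints
        const int positionIterations = 8;      // manual suggests 3; helps keep joints together
        world.Step(timeStep, velocityIterations, positionIterations);
    }

    // Same joint setup as above, written against the C++ API.
    b2RevoluteJoint* joinParts(b2World& world, b2Body* a, b2Body* b,
                               const b2Vec2& localAnchor, float lower, float upper) {
        b2RevoluteJointDef jd;
        jd.Initialize(a, b, a->GetWorldPoint(localAnchor));
        jd.enableLimit = true;
        jd.lowerAngle = lower;
        jd.upperAngle = upper;
        jd.collideConnected = false;           // the default, but worth being explicit
        return static_cast<b2RevoluteJoint*>(world.CreateJoint(&jd));
    }
    ```

    Box2D is also tuned for bodies roughly in the 0.1 to 10 metre range, so if the rag doll pieces are much smaller or far heavier than the rest of the scene, rescaling the world can be as effective as adding iterations.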

    Read the article

  • Ubuntu Server Cannot Route to the Internet

    - by ejes
    I've been having this problem for weeks now, and I can't seem to figure out the problem. My server can route the local network and serves it well, however it cannot access the internet. It can't be the router because everything else on this lan can route through the router. I've even switched the ethernet port. Any help would be appreciated. I've tried all the usual places, anyway, here are the configs: root@uhs:~# uname -a Linux uhs 3.0.0-16-generic-pae #28-Ubuntu SMP Fri Jan 27 19:24:01 UTC 2012 i686 i686 i386 GNU/Linux root@uhs:~# cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). # The loopback network interface auto lo iface lo inet loopback # The primary network interface # auto eth1 # iface eth1 inet dhcp auto eth0 iface eth0 inet static address 192.168.0.3 netmask 255.255.255.0 broadcast 192.168.0.255 gateway 192.168.0.1 root@uhs:~# ping -c 4 192.168.0.1 PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data. 64 bytes from 192.168.0.1: icmp_req=1 ttl=64 time=0.334 ms 64 bytes from 192.168.0.1: icmp_req=2 ttl=64 time=0.339 ms 64 bytes from 192.168.0.1: icmp_req=3 ttl=64 time=0.324 ms 64 bytes from 192.168.0.1: icmp_req=4 ttl=64 time=0.339 ms --- 192.168.0.1 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 2997ms rtt min/avg/max/mdev = 0.324/0.334/0.339/0.006 ms root@uhs:~# ping -c 4 209.85.145.103 PING 209.85.145.103 (209.85.145.103) 56(84) bytes of data. --- 209.85.145.103 ping statistics --- 4 packets transmitted, 0 received, 100% packet loss, time 3023ms root@uhs:~# ifconfig eth0 Link encap:Ethernet HWaddr 00:0c:6e:a0:92:6e inet addr:192.168.0.3 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::20c:6eff:fea0:926e/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:13131114 errors:0 dropped:0 overruns:0 frame:0 TX packets:10540297 errors:0 dropped:0 overruns:5 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3077922794 (3.0 GB) TX bytes:3827489734 (3.8 GB) Interrupt:10 Base address:0xa000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:7721 errors:0 dropped:0 overruns:0 frame:0 TX packets:7721 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:551950 (551.9 KB) TX bytes:551950 (551.9 KB) root@uhs:~# route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.0.1 0.0.0.0 UG 100 0 0 eth0 192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 root@uhs:~# # PRETEND Traceroute root@uhs:~# for i in {1..30}; do ping -t $i -c 1 209.85.145.103; done | grep "Time to live exceeded" root@uhs:~#

    Read the article

  • Ubuntu Server and setting up two nic cards

    - by kmalik
    I have ubuntu server on a computer with a wireless and hardwired nic card. The wireless needs to get the internet and pass it to the ubuntu server as well as pass it along to the hardwired nic card to more computers. I am having issues getting the basic set up as I believe the route table is grabbing from the wrong nic card. The router is 192.168.1.0 and the server is set to 192.168.1.11 on the wireless card through DHCP ETH0 (wired nic card) is set up to be 10.10.10.0 and the server is 10.10.10.1) I am not a linux or networking guru but basically I am trying to have internet come from a guest network 192.168.1.0 i believe to give internet to the ubuntu server then the ubuntu server will also A) have the wired nic serve DHCP addresses to other computers via a switch or router (that acts as a switch) via 10.10.10.0 addresses. And I would love if it also passed along internet capabilities as well if possible. Bu really at this point my hope is to at least get the internet working on the server and the DHCP to pass correctly. At the moment the specific issue I am having is getting ubuntu server to connect to the internet and have both nic cards up and running correctly. Any help would be appreciated! The route table is as follows: Destination Gateway GM Flags Metric Iface 0.0.0.0 10.10.10.1 0.0.0.0 UG 100 eth0 10.10.10.0 0.0.0.0 255.255.255.0 U 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 eth0 1992.168.1.0 0.0.0.0 255.255.255.255.0 U 0 eth1 My interfaces is set up as follows: auto lo iface lo inet loopback auto eth0 iface eth0 inet static address 10.10.10.1 netmask 255.255.255.0 network 10.10.10.0 broadcast 10.10.10.255 gateway 10.10.10.1 domain-name-servers 192.168.1.0 auto eth1 iface eth1 inet dhcp netmask 255.255.255.0 gateway 192.168.1.0 wpa-driver wext wpa-ssid "ssid_name" wpa-ap-scan 1 wpa-proto wpa wpa-pairwise ccmp wpa-group ccmp wpa-key-mgmt wpa-psk wpa-psk "HASH" My DHCPD.conf (as there is a domain name server on here is as follows): ddns-update-style none default-lease-time 600 max-lease-time 7200 authoritative option domain-name "Kamron's Network" option subnet-mask 255.255.255.0 option broadcast-address 10.10.10.255 option routers 192.168.1.0 option domain-name-server 192.168.1.0 98.223.128.213 ooption subnet 10.10.10.0 netmask 255.255.255.0 { range 10.10.10.10 10.10.10.99 } log-facility local7

    Read the article

  • Play or Lift: which one is more explicit?

    - by Andrea
    I am going to investigate web development with Scala, and the choice is between learning Lift or Play: probably I will not have enough time to try both, at least at first. Now, many comparisons between the two are available on the internet, but I would like to know how they compare with respect to being explicit and involving less magic. Let me explain what I mean by example. I have used, to various degrees, CakePHP, symfony2, Django and Grails. I feel a very clear distinction between Django and symfony2, which are very explicit about what you are doing, and Grails and CakePHP, which try to do their best to guess what you are trying to achieve and often feel "magical". Let me give some examples comparing Django and Grails. In Django, views are functions that take a request as input and return a response. You can explicitly instantiate an HttpResponse and populate its body with a string, or you can use shortcut functions to leverage the template system. In any case the return value from your view always has the same type. In contrast, the render method from Grails is highly polymorphic. You can throw a context at it and it will try to render a template which is found by convention using that context. Or you can pass it a pair of a template path and a context and that will work too. Or a string. Or XML. Grails tries hard to make sense of whatever you return from your controller. In the Django ORM, each model class has a static attribute representing the manager for that class. That manager exposes a fluent interface to build querysets. In Grails, you can have similar functionality by composing detached criteria. Still, the most common way to query objects seems to be the use of runtime-generated methods like FindUserByEmailNotNull or FindPostByDateGreaterThan. I will not go further, but my point is that in Django-like frameworks you have control over the whole flow of the request/response process, while in Grails-like ones I feel I only have to fill in the blanks and the framework will manage the rest of the flow for me. This is not to criticize Grails or CakePHP; which type you prefer is mainly a matter of preference. In fact, I happen to like some aspects of Grails, but I feel more comfortable with a framework which does less for me. Back to the point of the question: which one among Play and Lift is more explicit about what you do and which one tries to simplify more what you have to do with a layer of "magic"?

    Read the article

  • Playing with F#

    - by mroberts
    Project Euler is an awesome site.   When working with a new language it can be tricky to find problems that need solving, that are more complex than "Hello World" and simpler than a full blown application. Project Euler gives us just that, cool and thought provoking problems that usually don't take days to solve.  I've solved a number of questions with C# and some with Java.  BTW, I used Java because it had BigInteger support before .Net. A couple weeks ago, back when winter had a firm grip on Columbus, OH, I began playing (researching) with F#.  I began with Problem #1 from Project Euler.  I started by looking at my solution in C#. Here is my solution in C#. 1: using System; 2: using System.Collections.Generic; 3:   4: namespace Problem001 5: { 6: class Program 7: { 8: static void Main(string[] args) 9: { 10: List<int> values = new List<int>(); 11:   12: for (int i = 1; i < 1000; i++) 13: { 14: if (i % 3 == 0 || i % 5 == 0) 15: values.Add(i); 16: } 17: int total = 0; 18:   19: values.ForEach(v => total += v); 20:   21: Console.WriteLine(total); 22: Console.ReadKey(); 23: } 24: } 25: }   Now, after much tweaking and learning, here is my solution in F#.   1: open System 2:   3: let calc n = 4: [1..n] 5: |> List.map (fun x -> if (x % 3 = 0 || x % 5 = 0) then x else 0) 6: |> List.sum 7:   8: let main() = 9: calc 999 10: |> printfn "result = %d" 11: Console.ReadKey(true) |> ignore 12:   13: main() Just this little example highlights some cool things about F#. Type inference. F# infers the type of a value.  In the C# code above we declare a number of variables, the list, and a couple of ints.  F# does not require this; it infers that calc (a function) accepts an int and returns an int. Great built-in functionality for Lists.  List.map for example. BTW, I don't think I'm spilling the beans by giving away the code for Problem 1.  It is by far the easiest question, IMHO, solved by 92,000+ people. Next I'll look into writing a class library with F#.

    Read the article

  • Bridging 10GbE with 12.04 - bridging works but the bridging computer has no internet access

    - by Donal
    I have been trying to get 12.04 bridging working with two 10GbE cards. I have 2 10GbE cards in a linux box being used only for this bridge, 1 with 2 10GbaseT ports and another with a single CX4 port. I have 2 client computers connected with 10GbaseT cards and the CX4 card connects to a procurve switch. I can get the bridging happening mostly the way that I want, The clients receive dhcp information from the dhcp server (not the bridging machine) and can connect to and properly see the rest of the network. Speeds are ok, not amazing but working on that is another matter. My problem is that the bridging machine has no internet access ... meaning I can't update anything or apt-get anything It can ping all other machines on the local network. I've tried the helpful hints from: https://help.ubuntu.com/community/NetworkConnectionBridge "Enabling Internet Use on the Bridging Computer" and get the following RTNETLINK answers: File exists but dhclient br0 does nothing for me :( I think if it is anything it a multiple route problem as both br0 and eth4 have ipaddresses ... even though I have only set it up so that br0 has one ... Bridge setup details: /etc/network/interface auto br0 iface br0 inet static address 192.168.0.246 netmask 255.255.255.0 gateway 192.168.0.1 broadcast 192.168.0.255 dns-nameservers 192.168.0.1 dns-search example.com dns-domain example.com #(eth2 & eth3 are the 10GbaseT) #(eth4 is the CX4 connection) pre-up ip link set eth2 down pre-up ip link set eth3 down pre-up ip link set eth4 down pre-up brctl addbr br0 pre-up brctl addif br0 eth4 eth3 eth2 pre-up ip addr flush dev eth3 pre-up ip addr flush dev eth2 pre-up ip addr flush dev eth4 post-down ip link set eth4 down post-down ip link set eth2 down post-down ip link set eth3 down post-down ip link set br0 down post-down brctl delif br0 eth2 eth3 eth4 post-down brctl delbr br0 ifconfig -a br0 Link encap:Ethernet HWaddr 00:15:17:22:20:34 inet addr:192.168.0.102 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::215:17ff:fe22:2034/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:4957 errors:0 dropped:0 overruns:0 frame:0 TX packets:1077 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:596320 (596.3 KB) TX bytes:139952 (139.9 KB) eth4 Link encap:Ethernet HWaddr 00:60:dd:47:7c:05 inet addr:192.168.0.57 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::260:ddff:fe47:7c05/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:9000 Metric:1 RX packets:15391 errors:0 dropped:51 overruns:0 frame:0 TX packets:1207 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:5916769 (5.9 MB) TX bytes:154312 (154.3 KB) Interrupt:70 route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth4 0.0.0.0 192.168.0.1 0.0.0.0 UG 100 0 0 br0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 br0 192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 br0 192.168.0.0 0.0.0.0 255.255.255.0 U 1 0 0 eth4

    Read the article

  • Ubuntu 12.04 Automounting ntfs partition

    - by kuzyt
    Ive looked everywhere to fix this problem but I cant seem to figure out why its doing this. I have the following /etc/fstab entry to mount a ntfs partition using ntfs-3g. UUID=01CD842715EC2180 /media/mediahd02 ntfs defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2 The volume label for this partition is "MEDIA02" So I have had no problems with the fstab mounting. The problem however is that it automounts again using MEDIA02 label. I'm not sure automounting is the right term for this as its just an empty directory. Deleting this directory and rebooting is causing it to appear again. So listing /media I see both MEDIA02 & mediahd02 htpc@htpc:~$ cat /etc/fstab # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sdf1 during installation UUID=ec027544-b0e7-4145-99a4-905543a9781a / ext4 errors=remount-ro,noatime,discard 0 1 # swap was on /dev/sdf5 during installation UUID=1794409e-723f-41ac-9f31-ae059f377613 none swap sw 0 0 # Added all the lines below this tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0 UUID=0F70-3B06 /media/mediahd01 vfat defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2 UUID=01CD842715EC2180 /media/mediahd02 ntfs defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2 htpc@htpc:~$ cat /etc/mtab /dev/sdc1 / ext4 rw,noatime,errors=remount-ro,discard 0 0 proc /proc proc rw,noexec,nosuid,nodev 0 0 sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0 none /sys/fs/fuse/connections fusectl rw 0 0 none /sys/kernel/debug debugfs rw 0 0 none /sys/kernel/security securityfs rw 0 0 udev /dev devtmpfs rw,mode=0755 0 0 devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0 tmpfs /tmp tmpfs rw,noatime,mode=1777 0 0 tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0 none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0 none /run/shm tmpfs rw,nosuid,nodev 0 0 /dev/sdc1 /media/usbhd-sdc1 ext4 rw,relatime 0 0 /dev/sdb1 /media/mediahd02 fuseblk rw,noexec,nosuid,nodev,allow_other,default_permissions,blksize=4096 0 0 /dev/sda5 /media/mediahd01 vfat rw,noexec,nosuid,nodev,uid=1000,gid=1000,dmask=007,fmask=117 0 0 /dev/sdh1 /media/Windows_7 fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0 Can someone shed some light as to why its doing this ?

    Read the article

  • Drawing lots of tiles with OpenGL, the modern way

    - by Nic
    I'm working on a small tile/sprite-based PC game with a team of people, and we're running into performance issues. The last time I used OpenGL was around 2004, so I've been teaching myself how to use the core profile, and I'm finding myself a little confused. I need to draw in the neighborhood of 250-750 48x48 tiles to the screen every frame, as well as maybe around 50 sprites. The tiles only change when a new level is loaded, and the sprites are changing all the time. Some of the tiles are made up of four 24x24 pieces, and most (but not all) of the sprites are the same size as the tiles. A lot of the tiles and sprites use alpha blending. Right now I'm doing all of this in immediate mode, which I know is a bad idea. All the same, when one of our team members tries to run it, he gets very bad frame rates (~20-30 fps), and it's much worse when there are more tiles, especially when a lot of those tiles are the kind that are cut into pieces. This all makes me think that the problem is the number of draw calls being made. I've thought of a few possible solutions to this, but I wanted to run them by some people who know what they're talking about so I don't waste my time on something stupid: TILES: When a level is loaded, draw all the tiles once into a frame buffer attached to a big honking texture, and just draw a big rectangle with that texture on it every frame. Put all the tiles into a static vertex buffer when the level is loaded, and draw them that way. I don't know if there's a way to draw objects with different textures with a single call to glDrawElements, or if this is even something I'd want to do. Maybe just put all the tiles into a big giant texture and use funny texture coordinates in the VBO? SPRITES: Draw each sprite with a separate call to glDrawElements. Use a dynamic VBO somehow. Same texture question as number 2 above. Point sprites? This is probably silly. Are any of these ideas sensible? Is there a good implementation somewhere I could look over?
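
    Options 2 and 3 from the tile list are usually combined: pack all the tile images into one texture atlas, build a single static interleaved VBO (position plus atlas UVs) for a whole layer when the level loads, and draw the layer with one call; sprites that change every frame go into a second buffer uploaded with GL_DYNAMIC_DRAW or GL_STREAM_DRAW. A few hundred quads is a trivial vertex count for any GPU, so the win comes almost entirely from collapsing the per-tile draw calls and texture switches. A sketch of building the static layer buffer (the atlas lookup helpers are assumptions, not a real API):

    ```cpp
    #include <GL/glew.h>
    #include <cstdint>
    #include <vector>

    struct TileVertex { float x, y, u, v; };

    // Builds one GL_STATIC_DRAW buffer holding two triangles per tile, with
    // texture coordinates pointing into a single atlas texture.
    GLuint buildTileLayerVBO(const std::vector<std::uint8_t>& tiles,
                             int mapW, int mapH, float tileSize,
                             float (*atlasU)(std::uint8_t), float (*atlasV)(std::uint8_t),
                             float uvSize, GLsizei& vertexCountOut) {
        std::vector<TileVertex> verts;
        verts.reserve(static_cast<std::size_t>(mapW) * mapH * 6);
        for (int y = 0; y < mapH; ++y) {
            for (int x = 0; x < mapW; ++x) {
                std::uint8_t id = tiles[static_cast<std::size_t>(y) * mapW + x];
                float px = x * tileSize, py = y * tileSize;
                float u = atlasU(id), v = atlasV(id);
                TileVertex quad[6] = {
                    {px, py, u, v}, {px + tileSize, py, u + uvSize, v},
                    {px + tileSize, py + tileSize, u + uvSize, v + uvSize},
                    {px, py, u, v}, {px + tileSize, py + tileSize, u + uvSize, v + uvSize},
                    {px, py + tileSize, u, v + uvSize}};
                verts.insert(verts.end(), quad, quad + 6);
            }
        }
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(TileVertex),
                     verts.data(), GL_STATIC_DRAW);   // uploaded once per level
        vertexCountOut = static_cast<GLsizei>(verts.size());
        return vbo;
    }
    ```

    Drawing is then one glDrawArrays(GL_TRIANGLES, 0, vertexCount) per layer, with the position attribute at offset 0 and the UV attribute at offsetof(TileVertex, u); the 24x24 sub-tiles simply contribute four smaller quads each to the same buffer.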

    Read the article

  • Oracle JDeveloper 11gR2 Cookbook book review

    - by Chris Muir
    I recently received a free copy of Oracle JDeveloper 11gR2 Cookbook, published by Packt Publishing, for review. Readers of technical cookbooks will know this genre of text includes problems that developers will hit and the prescribed solutions, in this case for Oracle's Application Development Framework (ADF).  Books like this excel through thorough coverage, a logical progression of solutions throughout the book, and a readable narrative around the numerous steps and code. This book progresses well through ADF application assembly, ADF Business Components, the view layer, security, deployment and tuning.  Each recipe had a clear introduction and I especially enjoyed the "There's more" follow-up sections for some recipes that lead the reader on to related ideas and issues the reader really needs to be aware of. Also worthy of comment: having worked with ADF for over 5 years, there certainly were recipes and solutions I hadn't encountered before, and the book gets bonus points for that. As a reviewer, what negatives can I give this text? The book has cast its net too wide by trying to cover "everything from design and construction, to deployment, testing, debugging and optimization."  ADF is such a large and sophisticated technology that this book, with 100 recipes, barely scrapes the surface.  Don't expect all your ADF problems to be solved here. In turn there is inconsistency in the level of problems and solutions.  I felt at the beginning the book was pitching itself at advanced problems to solve (that's great for me), but then it introduces topics like building a static View Object or train.  These topics in my opinion are fairly simple and are covered by the Oracle documentation just as well; they shouldn't have been included here.  In conclusion, ADF beginners will find this book worthwhile as it will open your eyes to the wider problems and solutions required for ADF, and experts will value it just for the fact that they can point junior programmers at the book for certain problems and say "get on with it". Is there scope for more ADF tomes like this?  Yes!  I'd love to see a cookbook specializing in ADF Business Components (hint hint to budding authors).

    Read the article

  • Unable to Mount an external hard drive (NTFS)

    - by Mediterran81
    Ubuntu 11.10. When I plug my external Drive Western Digital MyPassport (500Go NTFS) I named WD. I get the following error: Unable to mount WD Error mounting: mount exited with exit code 1: helper failed with: mount: according to mtab, /dev/sdb1 is already mounted on /media/WD mount failed I have no problem with the internal NTFS partitions that auto-mounts on startup (ntfs-config does that). If I plug the WD before I boot Ubuntu, upon login, it's recognized and I can access without no problem. But if I remove it using (Safely remove) and then replug it, I get the error above. Here is my fstab: # /etc/fstab: static file system information. # # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 #Entry for /dev/sda5 : UUID=24540d0f-5803-493c-ace9-e3b3c0cedb26 / ext4 errors=remount-ro 0 1 #Entry for /dev/sda3 : UUID=E4C43F7EC43F51D2 /media/OS ntfs-3g defaults,locale=en_US.UTF-8 0 0 #Entry for /dev/sda2 : UUID=6A0070F10070C61B /media/RECOVERY ntfs-3g defaults,locale=en_US.UTF-8 0 0 #Entry for /dev/sdb1 : UUID=EA6854D268549F5F /media/WD ntfs-3g defaults,nosuid,nodev,locale=en_US.UTF-8 0 0 #Entry for /dev/sda6 : UUID=ed077c52-c50e-406c-9120-9cb6f86ec204 none swap sw 0 0 Here is my mtab /dev/sda5 / ext4 rw,errors=remount-ro,commit=0 0 0 proc /proc proc rw,noexec,nosuid,nodev 0 0 sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0 fusectl /sys/fs/fuse/connections fusectl rw 0 0 none /sys/kernel/debug debugfs rw 0 0 none /sys/kernel/security securityfs rw 0 0 udev /dev devtmpfs rw,mode=0755 0 0 devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0 tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0 none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0 none /run/shm tmpfs rw,nosuid,nodev 0 0 /dev/sda3 /media/OS fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0 /dev/sda2 /media/RECOVERY fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0 /dev/sdb1 /media/WD fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0 binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0 gvfs-fuse-daemon /home/hanine/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=hanine 0 0 Appearently it cannot be mounted because upon login, it finds that it is already mounted. Some sort of conflict. Does anyone have a clue on how to solve this. Thanks.

    Read the article

  • How exactly is Google Webmaster Tools measuring "Site Performance"?

    - by Rémi
    I've been working for two months now on improving our response time (mainly server side) on a new forum (a brand new product on a technical point of view) we've launched in Germany a few month ago and I'm a lot surprised by the results I get. I monitor our response time using Apache logs and our own implementation of Boomerang beacon. Using my stats, I can see that our new product responds in about 680 ms where our old product was responding in about 1050 ms. On the other side, Google Webmaster Tool tells us that our pages have an average reponse time of about 1500 ms today where it was 700 three months ago with our old product. I've figured that GWT was taking client side metrics into account so I've added some measures on our Boomerang beacon and everything looks just fine. I've also ran some random pages on ySlow and Google's Page Speed and everything looks better than it was before. We event have a 82% on Google's Page Speed tool which is quite cool for a site with some ads in it :) Lately, we have signed a deal with Akamai to use two of their products : CDN for our static files (we were using another CDN before but it wasn't very effective) and RMA to improve Networks routes. We have also introduced a new agressive cache mecanism to ensure that most of the pages served to crawlers are cached by our memcache grid. After checking my metrics, it seems that this changes have improved from 650ms to about 500ms, which is good (still not great but it is definitly an improvement). But webmaster tools continues to report an increasing average response time where we see it decreasing in the same time. Have you ever had the same kind of wierd behavior on your sites while doing performance improvements ? Do you have any idea how to monitor the same thing Google does with Site Performance in Google Webmaster Tools so that we could improve our site and constantly check if it is what Google wants ? Edit 2011/07/26 : Thanks for your answers guys ! Nevertheless, I was not precise enough. The main issue we have is not with the Site Performance page but with the Crawl Stats one for now. We probably found an issue on our side with some very slow pages (around 3000 ms !!) and we are trying to fix them. I'll keep you posted as soon I'll have some infos. Thanks again !

    Read the article

  • Wired and Wireless Network Issues with PPPoE

    - by user9054
    Hi Friends, I have got this issue with Ubuntu 10.10. I had been on Ubuntu 8.04 and then decided to try out Ubuntu 10.10. I booted with a LiveCD and was able to configure the wireless network painlessly using the LiveCD. So I happily installed Ubuntu 10.10. As soon as Ubuntu came up it detected the wireless network, and I was able to assign a static IP to eth1 (I don't use the DHCP option on my ADSL router), enter a WPA key and use pppoeconf to configure the dialer. The net was on and I was able to surf the net. All hunky dory so far. However, on the next boot the fun started. It did not detect the wireless network. I could not see the network manager icon in the systray. I used ifconfig and saw that the entry for eth1 was missing. I used ifup eth1 and it said that eth1 was already up. Then I installed wifi-radar. Wifi-Radar detected the wireless network. I configured wifi-radar for the detected wireless network, set the WPA driver to wext and used the manual IP settings. However, on clicking connect, wifi-radar started looking for a DHCP IP; needless to say it failed. For the love of god I cannot understand why wifi-radar is using DHCP when I have specified manual settings. Next I decided to use the wired network to surf the net looking for a solution. So I plugged in the network cable from my modem, it detected the plugged-in connection, I configured eth0, used pppoeconf and connected to the net. Then I foolishly decided to reboot my PC. And wonder of wonders, the same problem appeared. I cannot see eth0 in my ifconfig anymore. I used pon to start the dsl-provider connection and it said something about a network error. Now my ifconfig shows only lo; both eth0 and eth1 have disappeared. Can anybody help me with this? Is it a problem with IPv6, and if so how do you disable IPv6 on Ubuntu 10.10? Or is this a known issue with Ubuntu 10.10? PS: 1) I tried Linux Mint 10 and had the same issue; on rebooting the wireless network was not detected. 2) I have made myself the administrator so that there is no issue of rights or anything. Any help is appreciated.

    Read the article

  • C# Dev - I've tried Lisps, but I don't get it.

    - by Jonathan Mitchem
    After a few months of learning about and playing with lisps, both CL and a bit of Clojure, I'm still not seeing a compelling reason to write anything in it instead of C#. I would really like some compelling reasons, or for someone to point out that I'm missing something really big. The strengths of a Lisp (per my research): Compact, expressive notation - More so than C#, yes... but I seem to be able to express those ideas in C# too. Implicit support for functional programming - C# with LINQ extension methods: mapcar = .Select( lambda ) mapcan = .Select( lambda ).Aggregate( (a,b) = a.Union(b) ) car/first = .First() cdr/rest = .Skip(1) .... etc. Lambda and higher-order function support - C# has this, and the syntax is arguably simpler: "(lambda (x) ( body ))" versus "x = ( body )" "#(" with "%", "%1", "%2" is nice in Clojure Method dispatch separated from the objects - C# has this through extension methods Multimethod dispatch - C# does not have this natively, but I could implement it as a function call in a few hours Code is Data (and Macros) - Maybe I haven't "gotten" macros, but I haven't seen a single example where the idea of a macro couldn't be implemented as a function; it doesn't change the "language", but I'm not sure that's a strength DSLs - Can only do it through function composition... but it works Untyped "exploratory" programming - for structs/classes, C#'s autoproperties and "object" work quite well, and you can easily escalate into stronger typing as you go along Runs on non-Windows hardware - Yeah, so? Outside of college, I've only known one person who doesn't run Windows at home, or at least a VM of Windows on *nix/Mac. (Then again, maybe this is more important than I thought and I've just been brainwashed...) The REPL for bottom-up design - Ok, I admit this is really really nice, and I miss it in C#. Things I'm missing in a Lisp (due to a mix of C#, .NET, Visual Studio, Resharper): Namespaces. Even with static methods, I like to tie them to a "class" to categorize their context (Clojure seems to have this, CL doesn't seem to.) Great compile and design-time support the type system allows me to determine "correctness" of the datastructures I pass around anything misspelled is underlined realtime; I don't have to wait until runtime to know code improvements (such as using an FP approach instead of an imperative one) are autosuggested GUI development tools: WinForms and WPF (I know Clojure has access to the Java GUI libraries, but they're entirely foreign to me.) GUI Debugging tools: breakpoints, step-in, step-over, value inspectors (text, xml, custom), watches, debug-by-thread, conditional breakpoints, call-stack window with the ability to jump to the code at any level in the stack (To be fair, my stint with Emacs+Slime seemed to provide some of this, but I'm partial to the VS GUI-driven approach) I really like the hype surrounding Lisps and I gave it a chance. But is there anything I can do in a Lisp that I can't do as well in C#? It might be a bit more verbose in C#, but I also have autocomplete. What am I missing? Why should I use Clojure/CL?
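
    The multimethod point above is worth making concrete, since it is the one item in the list that C# has no direct syntax for (short of the dynamic keyword): dispatch on the runtime types of two arguments can be done with a lookup table keyed by the type pair, which really is a "few hours" job. A rough sketch of that table-driven approach, written in C++ purely for illustration; the C# equivalent would use a Dictionary keyed on a pair of Type objects in the same way:

    ```cpp
    #include <functional>
    #include <iostream>
    #include <map>
    #include <typeindex>
    #include <utility>

    // Dispatch on the dynamic types of two arguments via a lookup table.
    struct Shape { virtual ~Shape() = default; };
    struct Circle : Shape {};
    struct Square : Shape {};

    using Handler = std::function<void(Shape&, Shape&)>;
    using Key     = std::pair<std::type_index, std::type_index>;

    std::map<Key, Handler>& table() {
        static std::map<Key, Handler> t;
        return t;
    }

    template <class A, class B>
    void addMethod(void (*fn)(A&, B&)) {
        table()[{typeid(A), typeid(B)}] = [fn](Shape& a, Shape& b) {
            fn(static_cast<A&>(a), static_cast<B&>(b));   // types verified by the key
        };
    }

    void dispatch(Shape& a, Shape& b) {
        auto it = table().find({typeid(a), typeid(b)});
        if (it != table().end()) it->second(a, b);
        else std::cout << "no applicable method\n";
    }

    int main() {
        addMethod<Circle, Square>([](Circle&, Square&) { std::cout << "circle x square\n"; });
        Circle c; Square s;
        dispatch(c, s);   // prints "circle x square"
    }
    ```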

    Read the article

  • No client internet access when setting up these iptables rules

    - by Siriss
    I have read many other posts but cannot figure this out. eth0 is my external connected to a Comcast modem. The server has internet access with no issues. eth1 is internal and running DHCP for the clients. I have DHCP working just fine, all my clients can get an IP and ping the server but they cannot access the internet. I am using ISC-DHCP-SERVER and have set /etc/default/isc-dhcp-server to INTERFACE="eht1" Here is my dhcpd.conf file located in /etc/dhcp/dhcpd.conf ddns-update-style interim; ignore client-updates; subnet 10.0.10.0 netmask 255.255.255.0 { range 10.0.10.10 10.0.10.200; option routers 10.0.10.2; option subnet-mask 255.255.255.0; option domain-name-servers 208.67.222.222, 208.67.220.220; #OpenDNS # option domain-name "example.com"; default-lease-time 21600; max-lease-time 43200; authoritative; } I have made the *net.ipv4.ip_forward=1* change in /etc/sysctl.conf here is my interfaces file: auto lo iface lo inet loopback auto eth0 iface eth0 inet dhcp iface eth1 inet static address 10.0.10.2 netmask 255.255.255.0 network 10.0.10.0 auto eth1 And finally- here is my iptables.conf file: # Firewall configuration written by system-config-firewall # Manual customization of this file is not recommended. *nat :PREROUTING ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :POSTROUTING ACCEPT [0:0] -A POSTROUTING -s 10.0.10.0/24 -o eth0 -j MASQUERADE #-A PREROUTING -i eth0 -p tcp --dport 59668 -j DNAT --to-destination 10.0.10.2:59668 COMMIT *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -A INPUT -p icmp -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -i eth1 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT -A INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT -A INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT -A FORWARD -s 10.0.10.0/24 -o eth0 -j ACCEPT -A FORWARD -d 10.0.10.0/24 -m state --state ESTABLISHED,RELATED -i eth0 -j ACCEPT -A FORWARD -p icmp -j ACCEPT -A FORWARD -i lo -j ACCEPT -A FORWARD -i eth1 -j ACCEPT #-A FORWARD -i eth0 -m state --state NEW -m tcp -p tcp -d 10.0.10.2 --dport 59668 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT I am completely stuck. I cannot figure out why the clients cannot access the internet. Am I missing a service? Is a service not running? Any help would be greatly appreciated. I tried to be as thorough as possible but please let me know if I have missed something. Thank you!

    Read the article

  • Server-infrastructure recommendations

    - by Tim van Elsloo
    Here's the thing: I need a cheap, fast, reliable infrastructure that can dynamically scale (like Amazon S3: cloud-storage). I'm thinking of 3 different type of 'servers'. Application-server Should be able to run CentOS (or another light Linux-distr.) Should be able to run Apache Should be able to run PHP Should be able to run GD (so it does rely on it's cpu). Should be extremely reliable and fast. Database-server Should be able to run MySQL Should be able to... well, do nothing else :P. Should be extremely reliable and fast. Storage-server Should be able to run some kind of file-transfer-deamon (like FTP, CouchDB, etc.) Should be able to do nothing else. Should be extremely reliable and fast. So technically, by transferring all static data to 2 different servers/services, the application-server can totally focus on the webpages. My questions: What services do you recommend? Which is cheaper, faster and more reliable: using my own server, or using some cloud-storage/cloud-computing-service (like Amazon S3, CloudFiles, etc.)? How can I prevent bandwidth abuse (such as dos-attacks causing the bill to be extremely high)? What's the difference between "including CDN" and "excluding CDN"? It seems the price doesn't differ at CloudFiles? Do you have to pay "including CDN" + "excluding CDN" when you decide to enable the delivery-network? Or have you only got to pay "including CDN"? Should I use my own nameserver too or can I use my domain-hoster's nameservers? What are the minimum software specifications of a nameserver. Can I write some software myself? Does anyone have a good protocol-description? I hope you can answer my questions. Answers I shouldn't write my own nameserver-software. Instead, I should use something like bind. (http://osspro.com/2010/05/04/linux-create-your-own-domain-name-server-dns/).

    Read the article

  • Why won't my vertex buffer render in GLFW3?

    - by sm81095
    I have started to try to learn OpenGL, and I decided to use GLFW to assist in window creation. The problem is, since GLFW3 is so new, there are no tutorials on it or how to use it with modern OpenGL (3.3, specifically). Using the GLFW3 tutorial found on the website, which uses older OpenGL rendering (glBegin(GL_TRIANGLES), glVertex3f(), and such), I can get a triangle to render to the screen. The problem is, using new OpenGL, I can't get the same triangle to render to the screen. I am new to OpenGL, and GLFW3 is new to most people, so I may be completely missing something obvious, but here is my code: static const GLuint g_vertex_buffer_data[] = { -1.0f, -1.0f, 0.0f, 1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f }; int main(void) { GLFWwindow* window; if(!glfwInit()) { fprintf(stderr, "Failed to initialize GLFW."); return -1; } glfwWindowHint(GLFW_SAMPLES, 4); glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); window = glfwCreateWindow(800, 600, "Test Window", NULL, NULL); if(!window) { glfwTerminate(); fprintf(stderr, "Failed to create a GLFW window"); return -1; } glfwMakeContextCurrent(window); glewExperimental = GL_TRUE; GLenum err = glewInit(); if(err != GLEW_OK) { glfwTerminate(); fprintf(stderr, "Failed to initialize GLEW"); fprintf(stderr, (char*)glewGetErrorString(err)); return -1; } GLuint VertexArrayID; glGenVertexArrays(1, &VertexArrayID); glBindVertexArray(VertexArrayID); GLuint programID = LoadShaders("SimpleVertexShader.glsl", "SimpleFragmentShader.glsl"); GLuint vertexBuffer; glGenBuffers(1, &vertexBuffer); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW); while(!glfwWindowShouldClose(window)) { glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glUseProgram(programID); glEnableVertexAttribArray(0); glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0); glDrawArrays(GL_TRIANGLES, 0, 3); glDisableVertexAttribArray(0); glfwSwapBuffers(window); glfwPollEvents(); } glDeleteBuffers(1, &vertexBuffer); glDeleteProgram(programID); glfwDestroyWindow(window); glfwTerminate(); exit(EXIT_SUCCESS); } I know it is not my shaders, they are super simple and I've checked them against GLFW 2.7 so I know that they work. I'm assuming that I've missed something crucial to using the OpenGL context with GLFW3, so any help locating the problem would be greatly appreciated.
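
    The GLFW3/GLEW setup shown here is essentially the standard one, so when a core-profile triangle silently refuses to appear the quickest next step is to make GL report what it is unhappy about: check compile and link status inside LoadShaders rather than assuming success (in a 3.3 core context, for example, a fragment shader that still writes gl_FragColor fails to compile), and poll glGetError after the setup calls. Note that calling glewInit with glewExperimental on a core context is known to leave a spurious GL_INVALID_ENUM behind that can be cleared and ignored. A hedged sketch of those checks (the function names are placeholders):

    ```cpp
    #include <GL/glew.h>
    #include <cstdio>

    // Report shader compile problems instead of drawing nothing silently.
    bool checkShader(GLuint shader, const char* label) {
        GLint ok = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        if (ok != GL_TRUE) {
            char log[1024];
            glGetShaderInfoLog(shader, sizeof log, nullptr, log);
            std::fprintf(stderr, "%s compile failed:\n%s\n", label, log);
        }
        return ok == GL_TRUE;
    }

    // Report program link problems.
    bool checkProgram(GLuint program) {
        GLint ok = GL_FALSE;
        glGetProgramiv(program, GL_LINK_STATUS, &ok);
        if (ok != GL_TRUE) {
            char log[1024];
            glGetProgramInfoLog(program, sizeof log, nullptr, log);
            std::fprintf(stderr, "link failed:\n%s\n", log);
        }
        return ok == GL_TRUE;
    }

    // Drain the GL error queue; the first call clears the stray error glewInit
    // tends to raise on core profiles, after which any non-zero code is real.
    void drainGLErrors(const char* where) {
        for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
            std::fprintf(stderr, "GL error 0x%04x after %s\n", err, where);
    }
    ```

    It is also worth confirming that the vertex position attribute really is at location 0, either with layout(location = 0) in the vertex shader or glBindAttribLocation before linking, since the core profile will not guess.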

    Read the article

  • Wrapping REST based Web Service

    - by PaulPerry
    I am designing a system that will be running online under Microsoft Windows Azure. One component is a REST based web service which will really be a wrapper (using proxy pattern) which calls the REST web services of a business partner, which has to do with BLOB storage (note: we are not using azure storage). The majority of the functionality will be taking a request, calling our partner web service, receiving the request and then passing that back to the client. There are a number of reasons for doing this, but one of the big ones is that we are going to support three clients: our desktop application (win and mac), mobile apps (iOS), and a web front end. Having a single API which we then send to our partner protects us if that partner ever changes. I want our service to support both JSON and XML for the data transfer format, JSON for web and probably XML for the desktop and mobile (we already have an XML parser in those products). Our partner also supports both of these formats. I was planning on using ASP.NET MVC 4 with the Web API. As I design this, the thing that concerns me is the static type checking of C#. What if the partner adds or removes elements from the data? We can probably defensively code for that, but I still feel some concern. Also, we have to do a fair amount of tedious coding, to setup our API and then to turn around and call our partner’s API. There probably is not much choice on it though. But, in the back of my mind I wonder if maybe a more dynamic language would be a better choice. I want to reach out and see if anybody has had to do this before, what technology solutions they have used to (I am not attached to this one, these days Azure can host other technologies), and if anybody who has done something like this can point out any issues that came up. Thanks! Researching the issue seems to only find solutions which focus on connecting a SOAP web service over a proxy server, and not what I am referring to here. Note: Cross posted (by suggestion) from http://stackoverflow.com/questions/11906802/wrapping-rest-based-web-service Thank you!

    Read the article

  • cloud programming for OpenStack in C / C++

    - by Basile Starynkevitch
    (Sorry for such a fuzzy question, I am very new to cloud programming.) I am interested in designing (and developing) a (free software) program in C or C++ (probably, most of it being meta-programmed, i.e. part of the C code being generated). I am still in the thinking / designing phase. And I might perhaps give up. For reference, I am the main architect and implementor of GCC MELT, a domain-specific language to extend the GCC compiler (the MELT language is translated to C/C++ and is bootstrapped: the MELT to C/C++ translator being written in MELT). And I am dreaming of extending it with some cloud computing abilities. But I am a newbie in cloud computing. (I am only interested in free-software, GPLv3-friendly cloud computing, which probably means OpenStack.) I believe that "compiling on the cloud with some enhanced GCC" could make sense (for super-optimizations or static analysis of e.g. an entire Linux distribution, or at least massive GCC-compiled free software like Qt, GCC itself, or the Linux kernel). I'm dreaming of a MELT-specific monitoring program which would store, communicate, and enhance GCC compilation (extended by MELT). So the picture would be that each GCC process (actually the cc1 or cc1plus started by the gcc driver, suitably extended by some MELT extension) would communicate with some monitor. That "monitoring/persisting" program would run "on the cloud" (and probably manage some information produced by GCC, e.g. in NoSQL databases). So, how should some (yet to be written) C program (some Linux daemon) be designed to be cloud-friendly? So far, I understood that it should provide some Web service, probably through a RESTful interface, so it should use an HTTP server library like onion. And that OpenStack is able to start (e.g. a dozen) such services. But I don't have a clear picture of what OpenStack brings. So far, I noticed the ability to manage (and distribute) virtual machines (with some Python API). It is less clear how it can distribute an ELF executable, how it can start it, etc. Do you have any references or examples of C / C++ programming on the cloud? How should a "cloud-friendly" (actually, OpenStack-friendly) C/C++ server application be designed?
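
    As far as I understand it, OpenStack itself does not distribute or launch ELF executables; its compute service boots virtual machine images (driven through Python or EC2-style APIs), so the monitor binary would normally be baked into an image or installed at boot by something like cloud-init. "Cloud-friendly" then mostly reduces to: a self-contained Linux daemon that speaks HTTP, takes its configuration from the environment, logs to stdout or syslog, and keeps all persistent state in an external store (the NoSQL database mentioned above) so that any number of identical instances can sit behind a load balancer. A deliberately minimal sketch of that shape using plain POSIX sockets is below; it compiles as C or C++, the /status resource is made up, and a real implementation would use an HTTP library such as the onion one already mentioned.

    ```c
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void) {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        int yes = 1;
        setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);          /* port would come from config/env */
        if (bind(srv, (struct sockaddr *)&addr, sizeof addr) != 0) { perror("bind"); return 1; }
        listen(srv, 16);

        for (;;) {
            int cli = accept(srv, NULL, NULL);
            if (cli < 0) continue;
            char req[4096];
            ssize_t n = read(cli, req, sizeof req - 1);   /* sketch: assumes one read */
            if (n > 0) {
                req[n] = '\0';
                /* dispatch on the request line; a single hypothetical resource here */
                const char *status, *body;
                if (strstr(req, "GET /status ") == req) {
                    status = "200 OK";        body = "{\"status\":\"ok\"}";
                } else {
                    status = "404 Not Found"; body = "{\"error\":\"not found\"}";
                }
                char resp[256];
                int len = snprintf(resp, sizeof resp,
                    "HTTP/1.1 %s\r\nContent-Type: application/json\r\n"
                    "Content-Length: %zu\r\nConnection: close\r\n\r\n%s",
                    status, strlen(body), body);
                if (write(cli, resp, (size_t)len) < 0) perror("write");
            }
            close(cli);
        }
    }
    ```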

    Read the article
