Search Results

Search found 24073 results on 963 pages for 'mount point'.

Page 104/963 | < Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >

  • Ubuntu says FAT16, Windows says NTFS?

    - by myforwik
    I created a partition on a USB hard disk in Windows, and Windows reports it as an NTFS partition. Yet in Ubuntu 9.10, fdisk says it is a FAT16 partition. If I mount it with -t ntfs I see nothing, but if I mount it without that option I see all the files. Can anyone tell me what's going on here? Disk Management on the Windows computer definitely says it's NTFS, and a quick look at the raw data suggests it is NTFS, as I know the FAT16 format very well.
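
    A plausible explanation, hedged: fdisk reports the partition type byte stored in the MBR partition table, while Windows and the filesystem drivers look at the partition's own boot sector, so the two can disagree if the partition was created with a FAT16 type ID and later formatted as NTFS. A minimal Python sketch that reads both, assuming the disk is /dev/sdb (a hypothetical device path; reading it requires root):

        import struct

        DEV = "/dev/sdb"  # assumption: the USB disk; check with 'sudo fdisk -l'

        with open(DEV, "rb") as disk:
            mbr = disk.read(512)
            # First partition entry starts at offset 446; its type ID is byte 4.
            entry = mbr[446:446 + 16]
            type_id = struct.unpack("<B", entry[4:5])[0]   # 0x06 = FAT16, 0x07 = NTFS
            lba_start = struct.unpack("<I", entry[8:12])[0]
            print("partition table type ID: 0x%02x" % type_id)

            # The filesystem's own boot sector lives at the partition start.
            disk.seek(lba_start * 512)
            boot = disk.read(512)
            print("boot sector OEM ID: %r" % boot[3:11])   # 'NTFS    ' on an NTFS volume

    If the type ID prints 0x06 while the OEM ID reads NTFS, the partition table entry is simply stale; fdisk can change the type to 7 without touching the data.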

    Read the article

  • Testing the FireWire 800 port on a MacBook Pro

    - by dtlussier
    I am having trouble getting my MacBook Pro to mount an external FireWire hard drive. I can mount the disk without a problem on other Macs, just not on my machine. I haven't received any errors on my machine, and I don't see anything related to the FireWire port in the logs. Are there good diagnostic tools for this type of problem that come with the Mac, or good free alternatives?
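
    One hedged starting point: OS X ships command-line tools that will at least confirm whether the FireWire controller and the attached device are visible to the system, which separates a dead port from a mounting problem. For example:

        # Lists the FireWire bus and any devices the controller can see.
        system_profiler SPFireWireDataType

        # Dumps the I/O Registry; grep it for FireWire entries.
        ioreg -l | grep -i firewire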

    Read the article

  • Projective texture and deferred lighting

    - by Vodácek
    In my previous question, I asked whether it is possible to do projective texturing with deferred lighting. Now (more than half a year later) I have a problem with my implementation of the same thing. I am trying to apply this technique in the light pass (my projector doesn't affect albedo). I have these projector view and projection matrices:

        Matrix projection = Matrix.CreateOrthographicOffCenter(
            -halfWidth * Scale, halfWidth * Scale,
            -halfHeight * Scale, halfHeight * Scale, 1, 100000);
        Matrix view = Matrix.CreateLookAt(Position, Target, Vector3.Up);

    where halfWidth and halfHeight are half of the texture's width and height, Position is the projector's position, and Target is the projector's target. This seems to be OK. I am drawing a full-screen quad with this shader:

        float4x4 InvViewProjection;
        texture2D DepthTexture;
        texture2D NormalTexture;
        texture2D ProjectorTexture;
        float4x4 ProjectorViewProjection;

        sampler2D depthSampler = sampler_state
        {
            texture = <DepthTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D normalSampler = sampler_state
        {
            texture = <NormalTexture>;
            minfilter = point;
            magfilter = point;
            mipfilter = point;
        };

        sampler2D projectorSampler = sampler_state
        {
            texture = <ProjectorTexture>;
            AddressU = Clamp;
            AddressV = Clamp;
        };

        float viewportWidth;
        float viewportHeight;

        // Calculate the 2D screen position of a 3D position
        float2 postProjToScreen(float4 position)
        {
            float2 screenPos = position.xy / position.w;
            return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
        }

        // Calculate the size of one half of a pixel, to convert
        // between texels and pixels
        float2 halfPixel()
        {
            return 0.5f / float2(viewportWidth, viewportHeight);
        }

        struct VertexShaderInput
        {
            float4 Position : POSITION0;
        };

        struct VertexShaderOutput
        {
            float4 Position : POSITION0;
            float4 PositionCopy : TEXCOORD1;
        };

        VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
        {
            VertexShaderOutput output;
            output.Position = input.Position;
            output.PositionCopy = output.Position;
            return output;
        }

        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            float2 texCoord = postProjToScreen(input.PositionCopy) + halfPixel();

            // Extract the depth for this pixel from the depth map
            float4 depth = tex2D(depthSampler, texCoord);
            //return float4(depth.r,0,0,1);

            // Recreate the position with the UV coordinates and depth value
            float4 position;
            position.x = texCoord.x * 2 - 1;
            position.y = (1 - texCoord.y) * 2 - 1;
            position.z = depth.r;
            position.w = 1.0f;

            // Transform position from screen space to world space
            position = mul(position, InvViewProjection);
            position.xyz /= position.w;

            // Compute the projection
            float3 projection = tex2D(projectorSampler,
                postProjToScreen(mul(position, ProjectorViewProjection)) + halfPixel());
            return float4(projection, 1);
        }

    In the first part of the pixel shader, the position is recovered from the G-buffer (I use this code in other shaders without any problem) and then transformed into the projector's view-projection space. The problem is that the projection doesn't appear. Here is an image of my situation: the green lines are the rendered projector frustum. Where is my mistake hidden? I am using XNA 4. Thanks for any advice, and sorry for my English.

    EDIT: The shader above works, but the projection was too small. When I changed the Scale property to a large value (e.g. 100), the projection appears. But when the camera moves toward the projection, the projection expands, as can be seen in this YouTube video.

    Read the article

  • How to jump to a particular flag in a Unix manpage?

    - by dotancohen
    When reading a Unix man page in the terminal, how can I easily jump to the description of a particular flag? For instance, I need to know the meaning of the -o flag for mount. I run man mount and want to jump to the place where -o is described. Currently, I search for /-o, but that option is mentioned in several places before the section that actually describes it, so I must jump around quite a bit. Thanks.
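
    A hedged tip that usually works with the default less pager: option descriptions start at the beginning of an indented line, so anchoring the search pattern to the line start skips the scattered in-text mentions. Inside the man page, type:

        /^ *-o

    and press n to hop to the next match. Alternatively, man can be told to open the page already positioned on the flag (the -P option selects the pager, and less -p starts at the first match):

        man -P 'less -p "^ *-o"' mount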

    Read the article

  • Issues with glVertexAttribPointer's last 2 parameters?

    - by NoobScratcher
    Introduction: I will start out by explaining my setup, showing samples as I go along. I'm using these tools: OpenGL 3.3, GLSL 330, C++.

    Problem: when I render the Wavefront .obj 3D model, I get a very strange visual glitch. The model was supposed to be a square, but instead it's a triangulated mess, with parts of the vertices stretched out in massive amounts towards the bottom-left side of the frustum.

    Explanation: I'm using std::vector to store my Wavefront .obj model data, using sscanf to read the floating-point values into the structure members x, y, z and store them in the Point structure variable p:

        int index = IndexAssigner(1, 1);
        ifstream file(list[index].c_str());
        points.push_back(Point());
        Point p;
        int face[4];
        while (!file.eof())
        {
            char modelbuffer[10000];
            file.getline(modelbuffer, 10000);
            switch (modelbuffer[0])
            {
                case 'v':
                    sscanf(modelbuffer, "v %f %f %f", &p.x, &p.y, &p.z);
                    points.push_back(p);
                    break;
                case 'f':
                    sscanf(modelbuffer, "f %d %d %d %d", face, face + 1, face + 2, face + 3);
                    faces.push_back(face[0]);
                    faces.push_back(face[1]);
                    faces.push_back(face[2]);
                    faces.push_back(face[3]);
            }
            // Turn on FileReader, aka "RENDER CODE"
            FileReader = true;
        }

    Then I render the points vector using the .data() member of std::vector. Other declarations:

        int numfloats = 4;
        float* point = reinterpret_cast<float*>(&points[0]);
        int num_bytes = numfloats * sizeof(float);

    Container declarations:

        struct Point { float x, y, z; };
        std::vector<int> faces;
        std::vector<Point> points;

    Render code:

        glGenBuffers(1, &vertexbuffer);
        glGenTextures(1, &ModelTexture);
        glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
        glBindTexture(GL_TEXTURE_3D, ModelTexture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ModelSurface->w, ModelSurface->h,
                     0, GL_BGR, GL_UNSIGNED_BYTE, ModelSurface->pixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glBufferData(GL_ARRAY_BUFFER, sizeof(points), points.data(), GL_STATIC_DRAW);
        glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, num_bytes, points.data());
        glEnableVertexAttribArray(3);

        // Translation process
        GLfloat TranslationMatrix[] = {
            1.0, 0.0, 0.0, 0.0,
            0.0, 1.0, 0.0, 0.0,
            0.0, 0.0, 1.0, 1.0,
            0.0, 0.0, 0.0, 1.0
        };
        // Send the translation matrix up to the vertex shader
        glUniformMatrix4fv(translation, 1, TRUE, TranslationMatrix);
        glDrawElements(GL_QUADS, faces.size(), GL_UNSIGNED_INT, faces.data());

    I tried looking at what was causing this, went through every function and every parameter, and looked at the man pages. Then I found out that the problem could be my glVertexAttribPointer call. Here is the man page for glVertexAttribPointer: http://www.opengl.org/sdk/docs/man/xhtml/glVertexAttribPointer.xml. The last 2 parameters are my problem. How do I write those last 2 parameters? Do I try putting the data from points into them? How does it work with vectors? Is it fast? If you can't be bothered to look at the man pages, here are the descriptions straight from them:

        stride: Specifies the byte offset between consecutive generic vertex
        attributes. If stride is 0, the generic vertex attributes are
        understood to be tightly packed in the array. The initial value is 0.

        pointer: Specifies a pointer to the first component of the first
        generic vertex attribute in the array. The initial value is 0.

    If you want my full source: http://ideone.com/fPfkg. Thanks again if you do read this.

    Read the article

  • /etc/exports file not read on boot

    - by netimen
    I added some entries to my /etc/exports file, but when I boot my system the file does not seem to be read. I check with sudo exportfs, which returns nothing (and I can't mount the exported folders on other systems). Then I run sudo exportfs -a and then sudo exportfs again, and now all my exported folders are listed (and I can mount them on other systems). I'm running Kubuntu 9.04 with the 2.6.28-11-generic kernel.
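
    A hedged guess at the cause: /etc/exports is only read when exportfs runs, and on Ubuntu-family systems that normally happens from the NFS server's init script at boot. If that service isn't installed or isn't linked into the default runlevel, the file is never picked up. Assuming the server comes from the nfs-kernel-server package, something like this should confirm it:

        # Is the NFS server linked into the default runlevel?
        ls /etc/rc2.d/ | grep -i nfs

        # Restart it by hand and re-check the export list.
        sudo /etc/init.d/nfs-kernel-server restart
        sudo exportfs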

    Read the article

  • External hard drive is recognized but not mounted in OS X

    - by Sam Bryant
    My external HD got unplugged without being ejected, and now my Mac will no longer mount the drive, though it recognizes that it's there. I have already tried repairing the disk in Disk Utility and erasing it in Disk Utility, and it still won't mount. I can't imagine the hardware is actually damaged, otherwise the Mac wouldn't even recognize it (right?). Is there any other software solution I can try? Recovering my files is not a concern.
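
    One hedged thing to try before writing the drive off: the command-line diskutil often reports a more specific error than the Disk Utility GUI when a mount fails. The disk identifiers below are hypothetical; check the output of diskutil list for the real ones:

        # Find the disk and volume identifiers.
        diskutil list

        # Try mounting the volume directly and note the exact error.
        diskutil mount /dev/disk1s2

        # Verify the volume from the command line as well.
        diskutil verifyVolume /dev/disk1s2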

    Read the article

  • How to add a group when mounting a drive in fstab

    - by Master
    I am using this line to mount a drive at startup: /dev/sda5 /media/virtual ntfs defaults,umask=700,uid=1 0 0. This is working fine, but I need two things: (1) with this method all the folders inside the virtual folder have the same permissions, but I want 700 for the virtual directory and 777 for all the directories below it; (2) I want to add a group in the mount options as well, just like uid, if I could add a gid too. Is that possible?
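
    On the second point, a gid option does exist for the ntfs and vfat drivers and takes a numeric group ID, symmetrical with uid. A hedged fstab sketch (the group ID 1001 is hypothetical; look yours up with getent group):

        /dev/sda5  /media/virtual  ntfs  defaults,uid=1,gid=1001,umask=077  0  0

    The first request is harder: umask (and the finer-grained fmask/dmask pair, which apply separate masks to files and directories) covers the whole mount, so per-directory differences such as 700 on the mount point but 777 below it can't be expressed in a single fstab line.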

    Read the article

  • Automount a Windows share

    - by user1632812
    I have this line and it works: mount -t cifs -o myuser //192.168.0.12/Public/Docs /mnt/cifs_shares/Docs. But when I try with autofs, it doesn't. In /etc/auto.master: /mnt/cifs_shares/Docs /etc/auto.cifs_shares, and in /etc/auto.cifs_shares: Docs -fstype=cifs,rw,noperm,credentials=/etc/credentials.txt ://192.168.0.12/Public/Docs. It seems that the share actually gets mounted, but it turns out to be empty. When mounted with mount, it's not empty at all. What am I missing? I'm on CentOS 6.3, 64 bits.
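
    A hedged observation: in auto.master the first column is the directory autofs manages, and each key in the map file becomes a subdirectory of it. With Docs in both places, the share ends up at /mnt/cifs_shares/Docs/Docs, so /mnt/cifs_shares/Docs itself looks empty. A sketch of the usual layout:

        # /etc/auto.master: point autofs at the parent directory
        /mnt/cifs_shares  /etc/auto.cifs_shares

        # /etc/auto.cifs_shares: the key 'Docs' becomes /mnt/cifs_shares/Docs
        Docs  -fstype=cifs,rw,noperm,credentials=/etc/credentials.txt  ://192.168.0.12/Public/Docs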

    Read the article

  • ls hangs after NFS server reboot

    - by Apikot
    I've got server A and server B. B acts as an NFS server; A mounts from B. Both are running on EC2. Sometimes I have to shut down B and start a new, identical instance. After B is back up, trying to do anything inside the mounted directory on A (ls, for example) just hangs. I'm trying to set up a cron job that checks the status of the mount and remounts if anything is wrong. Is there any way to check the status of a mount?
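
    A minimal sketch of one common approach, in Python: stat the mount point under an alarm, so a hung NFS mount turns into a catchable timeout instead of a blocked process. The path is hypothetical; adjust for the real mount:

        import os
        import signal

        MOUNT = "/mnt/b"   # hypothetical NFS mount point

        class MountTimeout(Exception):
            pass

        def _on_alarm(signum, frame):
            raise MountTimeout()

        def mount_is_healthy(path, timeout=5):
            """Return True if stat() on the mount answers within `timeout` seconds."""
            signal.signal(signal.SIGALRM, _on_alarm)
            signal.alarm(timeout)
            try:
                os.stat(path)
                return True
            except (MountTimeout, OSError):
                return False
            finally:
                signal.alarm(0)   # cancel any pending alarm

        if not mount_is_healthy(MOUNT):
            # a cron job could force a lazy unmount and remount here, e.g.
            # via os.system("umount -l %s && mount %s" % (MOUNT, MOUNT))
            print("mount %s looks hung or missing" % MOUNT)

    One caveat: this only helps if the stalled operation is interruptible; a mount with hard,nointr semantics can leave the stat stuck in uninterruptible sleep regardless, in which case running the check in a disposable child process is the safer variant.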

    Read the article

  • Root SSH/SFTP Always 777

    - by Fluidbyte
    I have an Ubuntu server that I'm connecting to via SFTP (and also via an SSHFS mount locally). When I move a file to the server via the mount, I need it to have its permissions set to 777. I've added umask 000 to the .bashrc file at the advice of a friend, and it doesn't appear to be working. Basically I'm working completely in a restricted folder and need root to always leave the permissions open, whether I'm SSH'ed in or moving files to the server.
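
    A hedged note on why the dotfile approach fails: SFTP and SSHFS sessions don't start an interactive shell, so .bashrc is never read. For the SSHFS side, the mask can be forced as a mount option instead (host and paths are hypothetical):

        # Everything seen through this mount gets wide-open permissions.
        sshfs root@server:/restricted /mnt/restricted -o umask=000,allow_other

    For plain SFTP uploads, the umask has to be set on the server side (for example through PAM or the sshd SFTP subsystem configuration, depending on the OpenSSH version) rather than in a shell dotfile.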

    Read the article

  • Ubuntu 12.04: Does grub support multidevice boot from btrfs?

    - by Propantriol
    I'm using Ubuntu 12.04 LTS (Precise Pangolin) with GRUB 1.99. I installed onto a RAID10 multi-device btrfs partition. I thought multi-device support had been included in GRUB for a while, but on boot I'm dropped into an initramfs shell because it fails to mount root ("invalid argument"). This is actually the same message I get if I try to mount a btrfs volume without first executing btrfs device scan. Therefore I wonder: does Ubuntu's GRUB version even support booting from multi-device btrfs?

    Read the article

  • Mounting case-sensitive shared folder in VirtualBox

    - by rhettg
    I have an OS X host with an Ubuntu VM trying to mount a shared folder. I'm using the options: sudo mount -t vboxsf -o uid=1000,gid=100 pg ~/pg-host. The folder mounts fine; however, it appears that the mounted directory is case-insensitive, even though my OS X drive is formatted case-sensitive. Are there any options to control this behavior?

    Read the article

  • Where is the root?

    - by smwikipedia
    I read the manual page of the "mount" command, and it reads as follows: "All files accessible in a Unix system are arranged in one big tree, the file hierarchy, rooted at /. These files can be spread out over several devices. The mount command serves to attach the file system found on some device to the big file tree." My question is: where is this "big tree" located?

    Read the article

  • eth0 config file is read-only even as root?

    - by user133916
    I am logged in as the root user and I want to edit my /etc/sysconfig/network-scripts/ifcfg-eth0 file. When I edit it, vim shows "-- INSERT -- W10: Warning: Changing a readonly file", even though I am logged in as root. My mount output shows:

        [root@s1202 ~]# mount
        rootfs on / type rootfs (rw)
        /dev/root on / type ext3 (ro,data=ordered)
        /dev on /dev type tmpfs (rw)
        /proc on /proc type proc (rw)
        /sys on /sys type sysfs (rw)
        /proc/bus/usb on /proc/bus/usb type usbfs (rw)
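
    A hedged reading of that output: the root filesystem itself is mounted read-only (/dev/root ... ext3 (ro, ...)), which is why even root gets the W10 warning. Remounting it read-write should make the file editable, though it is worth finding out why it is read-only in the first place, since ext3 remounts itself read-only after disk errors when errors=remount-ro is in effect:

        mount -o remount,rw /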

    Read the article

  • WPF / C#: Transforming coordinates from an image control to the image source

    - by Gabriel
    I'm trying to learn WPF, so here's a simple question, I hope. I have a window that contains an Image element bound to a separate data object with a user-configurable Stretch property:

        <Image Name="imageCtrl" Source="{Binding MyImage}" Stretch="{Binding ImageStretch}" />

    When the user moves the mouse over the image, I would like to determine the coordinates of the mouse with respect to the original image (before the stretching/cropping that occurs when it is displayed in the control), and then do something with those coordinates (update the image). I know I can add an event handler to the MouseMove event over the Image control, but I'm not sure how best to transform the coordinates:

        void imageCtrl_MouseMove(object sender, MouseEventArgs e)
        {
            Point locationInControl = e.GetPosition(imageCtrl);
            Point locationInImage = ???
            updateImage(locationInImage);
        }

    Now, I know I could compare the size of Source to the ActualSize of the control, and then switch on imageCtrl.Stretch to compute the scale factors and offsets on X and Y, and do the transform myself. But WPF has all the information already, and this seems like functionality that might be built into the WPF libraries somewhere. So I'm wondering: is there a short and sweet solution? Or do I need to write this myself?

    EDIT: I'm appending my current, not-so-short-and-sweet solution. It's not that bad, but I'd be somewhat surprised if WPF didn't provide this functionality automatically:

        Point ImgControlCoordsToPixelCoords(Point locInCtrl,
            double imgCtrlActualWidth, double imgCtrlActualHeight)
        {
            if (ImageStretch == Stretch.None)
                return locInCtrl;

            Size renderSize = new Size(imgCtrlActualWidth, imgCtrlActualHeight);
            Size sourceSize = bitmap.Size;

            double xZoom = renderSize.Width / sourceSize.Width;
            double yZoom = renderSize.Height / sourceSize.Height;

            if (ImageStretch == Stretch.Fill)
                return new Point(locInCtrl.X / xZoom, locInCtrl.Y / yZoom);

            double zoom;
            if (ImageStretch == Stretch.Uniform)
                zoom = Math.Min(xZoom, yZoom);
            else // (imageCtrl.Stretch == Stretch.UniformToFill)
                zoom = Math.Max(xZoom, yZoom);

            return new Point(locInCtrl.X / zoom, locInCtrl.Y / zoom);
        }

    Read the article

  • Can't create more than one overlay in Seadragon

    - by XGreen
    Hi everyone, I am trying to add overlays to a Seadragon map I am making, but for some reason that I cannot figure out, Seadragon ignores all my overlays except the first one. Any help with this is much appreciated.

        var viewer = null;

        function init() {
            Seadragon.Config.autoHideControls = false;
            viewer = new Seadragon.Viewer("container");
            viewer.addEventListener("open", addOverlays);
            viewer.addControl(makeControl(), Seadragon.ControlAnchor.TOP_RIGHT);
            $(viewer.getNavControl()).parent().parent().css({ 'top': 10, 'right': 10 });
            viewer.openDzi("_assets/Mapdata/dzc_output.xml");
        }

        function makeControl() {
            var control = document.createElement("a");
            var controlText = document.createTextNode("");
            control.href = "#"; // so browser shows it as link
            control.className = "control";
            control.appendChild(controlText);
            Seadragon.Utils.addEvent(control, "click", onControlClick);
            return control;
        }

        function onControlClick(event) {
            Seadragon.Utils.cancelEvent(event); // don't process link
            if (!viewer.isOpen()) {
                return;
            }
            // These are the coordinates of Europe on this map
            var x = 0.5398693914203284;
            var y = 0.21155952391206562;
            var z = 5;
            viewer.viewport.panTo(new Seadragon.Point(x, y));
            viewer.viewport.zoomTo(z);
            viewer.viewport.ensureVisible();
        }

        function addOverlays(viewer) {
            drawer = viewer.drawer;
            // note: the same img node is used as the elmt of both overlays below
            var img = document.createElement("img");
            img.src = "_assets/Images/pushpin.png";
            $(img).addClass('pushPin');
            var overlays = [
                { elmt: img, point: new Seadragon.Point(0.51, 0.22) },
                { elmt: img, point: new Seadragon.Point(0.20, 0.13) }
            ];
            for (var i = 0; i < overlays.length; i++) {
                drawer.addOverlay(overlays[i].elmt, overlays[i].point);
            }
        }

        Seadragon.Utils.addEvent(window, "load", init);

    Read the article

  • Echoing styles into a mashup of a Google map and WordPress custom fields

    - by zac
    Using WordPress, I am pulling custom fields from specific posts to fill in the content for a Google-generated map. I am using this code:

        var point = new GLatLng(48.5139, -123.150531);
        var marker = createMarker(point, "Lime Kiln State Park", '<?php
            $post_id = 182;
            $my_post = get_post($post_id);
            $title = $my_post->post_title;
            $snip = get_post_meta($post_id, 'mapExcerpt', true);
            echo $title;
            echo $snip;
        ?>')
        map.addOverlay(marker);

    I am trying to echo CSS style blocks, but this causes a JavaScript error:

        var point = new GLatLng(48.5139, -123.150531);
        var marker = createMarker(point, "Lime Kiln State Park", '<?php
            $post_id = 182;
            $my_post = get_post($post_id);
            $title = $my_post->post_title;
            $snip = get_post_meta($post_id, 'mapExcerpt', true);
            echo "<div class='theTitle'>";
            echo $title;
            echo "</div>";
            echo $snip;
        ?>')
        map.addOverlay(marker);

    I get the error "missing ) after argument list", and the output is:

        var point = new GLatLng(48.5139, -123.150531);
        // note: the single quotes around theTitle terminate the surrounding
        // JavaScript string early, hence the syntax error
        var marker = createMarker(point, "Lime Kiln State Park", '<div class='theTitle'>Site Title</div>Site excerpt')
        map.addOverlay(marker);

    Can someone please show me a more elegant (working) solution for this?

    Read the article

  • Dependency Property on ValueConverter

    - by spoon16
    I'm trying to initialize a converter in the Resources section of my UserControl with a reference to one of the objects in my control. When I try to run the application, I get a XAML parse exception. XAML:

        <UserControl.Resources>
            <converter:PointConverter x:Key="pointConverter" Map="{Binding ElementName=ThingMap}" />
        </UserControl.Resources>
        <Grid>
            <m:Map x:Name="ThingMap" />
        </Grid>

    PointConverter class:

        public class PointConverter : DependencyObject, IValueConverter
        {
            public Microsoft.Maps.MapControl.Map Map
            {
                get { return (Microsoft.Maps.MapControl.Map)GetValue(MapProperty); }
                set { SetValue(MapProperty, value); }
            }

            // Using a DependencyProperty as the backing store for Map.
            // This enables animation, styling, binding, etc...
            public static readonly DependencyProperty MapProperty =
                DependencyProperty.Register("Map",
                    typeof(Microsoft.Maps.MapControl.Map),
                    typeof(PointConverter), null);

            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                string param = (string)parameter;
                Microsoft.Maps.MapControl.Location location = value as Microsoft.Maps.MapControl.Location;
                if (location != null)
                {
                    Point point = Map.LocationToViewportPoint(location);
                    if (string.Compare(param.ToUpper(), "X") == 0)
                        return point.X;
                    else if (string.Compare(param.ToUpper(), "Y") == 0)
                        return point.Y;
                    return point;
                }
                return null;
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                throw new NotImplementedException();
            }
        }

    Read the article

  • JList - deselect when clicking an already selected item

    - by Peter
    If a selected index on a JList is clicked, I want it to deselect. In other words, clicking on the indices actually toggles their selection. It didn't look like this was supported, so I tried:

        list.addMouseListener(new MouseAdapter() {
            public void mousePressed(MouseEvent evt) {
                java.awt.Point point = evt.getPoint();
                int index = list.locationToIndex(point);
                if (list.isSelectedIndex(index))
                    list.removeSelectionInterval(index, index);
            }
        });

    The problem here is that this is invoked after the JList has already acted on the mouse event, so it deselects everything. So then I tried removing all of the JList's MouseListeners, adding my own, and then adding all of the default listeners back. That didn't work, since the JList would reselect the index after I had deselected it. Anyway, what I eventually came up with is:

        MouseListener[] mls = list.getMouseListeners();
        for (MouseListener ml : mls)
            list.removeMouseListener(ml);

        list.addMouseListener(new MouseAdapter() {
            public void mousePressed(MouseEvent evt) {
                java.awt.Point point = evt.getPoint();
                final int index = list.locationToIndex(point);
                if (list.isSelectedIndex(index))
                    SwingUtilities.invokeLater(new Runnable() {
                        public void run() {
                            list.removeSelectionInterval(index, index);
                        }
                    });
            }
        });

        for (MouseListener ml : mls)
            list.addMouseListener(ml);

    ... and that works. But I don't like it. Is there a better way?

    Read the article

  • correct fisheye distortion

    - by Will
    I have some points that describe positions in a picture taken with a fisheye lens. I've found this description of how to generate a fisheye effect, but not how to reverse it. How do you calculate the radial distance from the centre to go from fisheye to rectilinear? My function stub looks like this:

        Point correct_fisheye(const Point& p, const Size& img)
        {
            // to polar
            const Point centre = { img.width / 2, img.height / 2 };
            const Point rel = { p.x - centre.x, p.y - centre.y };
            const double theta = atan2(rel.y, rel.x);
            double R = sqrt((rel.x * rel.x) + (rel.y * rel.y));

            // fisheye undistortion in here please
            // ... change R ...

            // back to rectangular
            const Point ret = Point(centre.x + R * cos(theta), centre.y + R * sin(theta));
            fprintf(stderr, "(%d,%d) in (%d,%d) = %f,%f = (%d,%d)\n",
                    p.x, p.y, img.width, img.height, theta, R, ret.x, ret.y);
            return ret;
        }
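
    A hedged sketch of the general recipe, since the linked description isn't quoted here: if the forward (distorting) model maps an undistorted radius r_u to a distorted radius r_d = f(r_u), the correction step just inverts f, numerically if no closed form exists. Below is a Python sketch using Newton's method, assuming the common single-coefficient polynomial model r_d = r_u * (1 + k * r_u^2) purely for illustration; substitute whatever forward model the description you found actually uses:

        def undistort_radius(r_d, k, iterations=10):
            """Invert r_d = f(r_u) = r_u * (1 + k * r_u**2) with Newton's method.

            The polynomial model and the coefficient k are assumptions for
            illustration; plug in your lens's actual forward model instead.
            """
            r_u = r_d  # the distorted radius is a good starting guess
            for _ in range(iterations):
                f = r_u * (1 + k * r_u ** 2) - r_d   # residual of the forward model
                df = 1 + 3 * k * r_u ** 2            # derivative df/dr_u
                r_u -= f / df
            return r_u

        # Example: with k = 0.05, a distorted radius of 1.0 maps back to ~0.96.
        print(undistort_radius(1.0, 0.05))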

    Read the article

  • Setting up DrJava to work through Friedman / Felleisen "A Little Java"

    - by JDelage
    All, I'm going through the Friedman & Felleisen book "A Little Java, A Few Patterns" and trying to type the examples into DrJava, but I'm getting some errors. I'm a beginner, so I might be making rookie mistakes. Here is what I have set up:

        public class ALittleJava {
            // ABSTRACT CLASS POINT
            // (note: these nested classes are inner, i.e. non-static, classes,
            // so each instance needs an enclosing ALittleJava instance)
            abstract class Point {
                abstract int distanceToO();
            }

            class CartesianPt extends Point {
                int x;
                int y;
                int distanceToO() {
                    return ((int) Math.sqrt(x * x + y * y));
                }
                CartesianPt(int _x, int _y) {
                    x = _x;
                    y = _y;
                }
            }

            class ManhattanPt extends Point {
                int x;
                int y;
                int distanceToO() {
                    return (x + y);
                }
                ManhattanPt(int _x, int _y) {
                    x = _x;
                    y = _y;
                }
            }
        }

    And on the main's side:

        public class Main {
            public static void main(String[] args) {
                Point y = new ManhattanPt(2, 8);
                System.out.println(y.distanceToO());
            }
        }

    The compiler cannot find the symbols Point and ManhattanPt in the program. If I precede each with ALittleJava., I get another error in the main, i.e., "an enclosing instance that contains ALittleJava.ManhattanPt is required". I've tried to find resources on the net, but the book must have a pretty confidential following and I couldn't find much. Thank you all.

    Read the article

  • Fast JSON serialization (and comparison with Pickle) for cluster computing in Python?

    - by user248237
    I have a set of data points, each described by a dictionary. The processing of each data point is independent, and I submit each one as a separate job to a cluster. Each data point has a unique name, and my cluster submission wrapper simply calls a script that takes a data point's name and a file describing all the data points. That script then accesses the data point from the file and performs the computation. Since each job has to load the set of all points only to retrieve the point to be run, I wanted to optimize this step by serializing the file describing the set of points into an easily retrievable format. I tried using jsonpickle, with the following method, to serialize a dictionary describing all the data points to file:

        def json_serialize(obj, filename, use_jsonpickle=True):
            f = open(filename, 'w')
            if use_jsonpickle:
                import jsonpickle
                json_obj = jsonpickle.encode(obj)
                f.write(json_obj)
            else:
                simplejson.dump(obj, f, indent=1)
            f.close()

    The dictionary contains very simple objects (lists, strings, floats, etc.) and has a total of 54,000 keys. The JSON file is ~20 megabytes in size. It takes ~20 seconds to load this file into memory, which seems very slow to me. I switched to using pickle with the same exact object, and found that it generates a file that's about 7.8 megabytes in size and can be loaded in ~1-2 seconds. This is a significant improvement, but it still seems like loading a small object (less than 100,000 entries) should be faster. Aside from that, pickle is not human-readable, which was the big advantage of JSON for me. Is there a way to use JSON to get similar or better speedups? If not, do you have other ideas on structuring this? (Is the right solution to simply "slice" the file describing each event into a separate file and pass that on to the script that runs a data point in a cluster job? It seems like that could lead to a proliferation of files.) Thanks.
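
    A hedged sketch of two things worth measuring here: jsonpickle adds a layer of type-tagging on top of the JSON library, so calling the underlying JSON encoder directly is usually faster for plain dicts/lists/strings, and pickle's speed depends heavily on the protocol (the default protocol 0 is text-based and slow; protocol 2 is binary and much faster, especially via cPickle). A minimal comparison, assuming Python 2-era libraries as in the question, with obj standing for the 54,000-key dictionary:

        import time

        def timed(label, fn):
            """Run fn() and print how long it took; a rough benchmark helper."""
            start = time.time()
            result = fn()
            print("%s: %.2fs" % (label, time.time() - start))
            return result

        # Plain simplejson, with no jsonpickle layer on top.
        import simplejson
        timed("json dump", lambda: simplejson.dump(obj, open("points.json", "w")))
        timed("json load", lambda: simplejson.load(open("points.json")))

        # cPickle with binary protocol 2 instead of the default text protocol 0.
        import cPickle
        timed("pickle dump", lambda: cPickle.dump(obj, open("points.pkl", "wb"), 2))
        timed("pickle load", lambda: cPickle.load(open("points.pkl", "rb")))

    If per-job access to a single point is the real goal, a keyed on-disk store (for example the standard library's shelve module, which pickles values under string keys) would avoid loading all 54,000 entries in every job without slicing the data into thousands of files.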

    Read the article

< Previous Page | 100 101 102 103 104 105 106 107 108 109 110 111  | Next Page >