Search Results

Search found 2798 results on 112 pages for 'pull'.


  • What's wrong with my wireless?

    - by dazzle
    I am having issues with my wireless connection. My connection is constantly disconnecting, then attempting to reconnect, reconnecting momentarily, then disconnecting again, on time scales that range from seconds to minutes. In the meantime, needless to say, I'm having significant packet loss. I'm running Ubuntu 14.04 64-bit, updated and upgraded as of today. Here is my card and driver:

        delta@sager:~$ lspci -vq | grep -i wireless -B 1 -A 5
        04:00.0 Network controller: Intel Corporation Wireless 7260 (rev 73)
                Subsystem: Intel Corporation Dual Band Wireless-AC 7260
                Flags: bus master, fast devsel, latency 0, IRQ 47
                Memory at f7d00000 (64-bit, non-prefetchable) [size=8K]
                Capabilities:
                Kernel driver in use: iwlwifi

    Here is my kernel:

        delta@sager:~$ uname -r
        3.13.0-34-generic

    None of the other machines on my home network are having these issues. Windows Vista is networking without issue, for goodness sake ;-) Here is a small clipping from the output of dmesg. As you can see, I am getting a cfg80211 message of some sort over and over again (FYI, I've replaced my MAC address with a series of dashes, so anywhere there is a ---------------, that is where the MAC address was):

        [ 1881.739161] wlan1: authenticate with ---------------
        [ 1881.741561] wlan1: send auth to --------------- (try 1/3)
        [ 1881.743440] wlan1: authenticated
        [ 1881.746027] wlan1: associate with --------------- (try 1/3)
        [ 1881.749244] wlan1: RX AssocResp from --------------- (capab=0x411 status=0 aid=4)
        [ 1881.754727] wlan1: associated
        [ 1881.754827] cfg80211: Calling CRDA for country: US
        [ 1881.761552] cfg80211: Regulatory domain changed to country: US
        [ 1881.761559] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        [ 1881.761564] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2700 mBm)
        [ 1881.761568] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 1700 mBm)
        [ 1881.761571] cfg80211:   (5250000 KHz - 5330000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1881.761574] cfg80211:   (5490000 KHz - 5600000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1881.761577] cfg80211:   (5650000 KHz - 5710000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1881.761580] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm)
        [ 1881.761584] cfg80211:   (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 4000 mBm)
        [ 1882.391038] cfg80211: Calling CRDA to update world regulatory domain
        [ 1882.396254] cfg80211: World regulatory domain updated:
        [ 1882.396260] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        [ 1882.396265] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1882.396268] cfg80211:   (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1882.396271] cfg80211:   (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
        [ 1882.396274] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1882.396277] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1886.148252] wlan1: authenticate with ---------------
        [ 1886.150005] wlan1: send auth to --------------- (try 1/3)
        [ 1886.151807] wlan1: authenticated
        [ 1886.154847] wlan1: associate with --------------- (try 1/3)
        [ 1886.158147] wlan1: RX AssocResp from --------------- (capab=0x411 status=0 aid=4)
        [ 1886.163464] wlan1: associated
        [ 1886.163520] wlan1: Limiting TX power to 30 (30 - 0) dBm as advertised by ---------------
        [ 1886.163588] cfg80211: Calling CRDA for country: US
        [ 1886.170500] cfg80211: Regulatory domain changed to country: US
        [ 1886.170508] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        [ 1886.170513] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2700 mBm)
        [ 1886.170517] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 1700 mBm)
        [ 1886.170520] cfg80211:   (5250000 KHz - 5330000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1886.170523] cfg80211:   (5490000 KHz - 5600000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1886.170526] cfg80211:   (5650000 KHz - 5710000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1886.170529] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 3000 mBm)
        [ 1886.170533] cfg80211:   (57240000 KHz - 63720000 KHz @ 2160000 KHz), (N/A, 4000 mBm)
        [ 1887.200197] cfg80211: Calling CRDA to update world regulatory domain
        [ 1887.203655] cfg80211: World regulatory domain updated:
        [ 1887.203659] cfg80211:   (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
        [ 1887.203662] cfg80211:   (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1887.203664] cfg80211:   (2457000 KHz - 2482000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1887.203666] cfg80211:   (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
        [ 1887.203668] cfg80211:   (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
        [ 1887.203670] cfg80211:   (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)

    I've poked around on AskUbuntu and have not found any adequate solutions; I've also found similar threads that were left unanswered. Any advice/experience/threads I might be able to pull on would be greatly appreciated. In your opinion, is this a kernel issue, a hardware issue, etc.? Thanks in advance.

    EDIT: chili, here's the output of iwconfig:

        delta@sager:~$ iwconfig
        wlan1     IEEE 802.11abg  ESSID:"LANbeforetime"
                  Mode:Managed  Frequency:2.412 GHz  Access Point: -----------
                  Bit Rate=48 Mb/s   Tx-Power=16 dBm
                  Retry long limit:7   RTS thr:off   Fragment thr:off
                  Power Management:off
                  Link Quality=44/70  Signal level=-66 dBm
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:80   Missed beacon:0

        eth0      no wireless extensions.

        lo        no wireless extensions.

    Read the article

  • Best approach to depth streaming via existing codec

    - by Kevin
    I'm working on a development system (and game) intended for games set mostly in static third-person views. We produce our scenery by CG and photographic techniques. Our background art is rendered offline by a production-grade renderer. To allow the runtime imagery to interact properly with the background art, I wrote a program to convert depth output by Mental Ray into a texture, and a pixel shader to draw a quad such that the Z data comes from the texture. This technique is working out very well, but now we've decided that some of the camera-angle changes between scenes should be animated.

    The animation itself is straightforward to produce from our CG models. We intend to encode it to some HD video codec such as H.264. The problem is that in order to maintain our runtime imagery on the screen, the depth buffer will need to be loaded for each video frame. Due to the bandwidth, the video's depth data will need to be compressed efficiently. I've looked into methods for performing temporal compression of depth info and found an interesting research paper here: http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/depth-streaming.pdf

    The method establishes a mapping between 16-bit depth values and YCbCr values. The mapping is tuned to the properties of existing video codecs in order to maximize the precision of the decoded depths after the YCbCr has undergone video compression. It allows an existing, unmodified video codec to be used on the back end.

    I'm looking at how to pull this off with the least possible work. (This design change was unplanned.) Our game engine itself is native C++, presently for Win32 and DirectX, although we've worked hard to keep platform dependence segregated because we intend other ports. We don't have motion-video facilities in the engine yet, but will ultimately need them anyway for cinematics. I was planning on using some off-the-shelf motion-video solution we can plug into our engine, and haven't chosen one yet. This new requirement makes selecting one harder since, among other things, we'll now need to bypass colourspace conversion on one of the streams, and will also need to play two streams simultaneously in lockstep, on top of, in some cases, audio on one of them (for the cinematics).

    I'm also wondering whether it's possible (or even useful) to do the conversion from YCbCr to depth in a pixel shader, or whether it's better to just do it on the CPU and separately load the resulting depth values into a locked texture. The conversion unfortunately does involve branching logic per pixel. (There are more naive mappings that don't need branching, but they produce inferior results.) It could be reduced to a table lookup, but the table would be 32MB.

    Programming is second nature to me, but I'm not that experienced with pixel shaders and have zero knowledge of off-the-shelf video solutions. I'd therefore be interested in advice from others who may have dealt more with depth streaming, pixel shaders, and/or off-the-shelf codecs, regarding how feasible the proposed application is and which off-the-shelf video systems out there would best get along with this use case.
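    For illustration, here is a minimal sketch (my own, not taken from the paper) of the naive high/low-byte packing that the question contrasts with the paper's tuned mapping. It is naive precisely because chroma subsampling and quantization in a codec will mangle the low byte packed this way, which is what the paper's mapping is designed to survive:

        #include <cstdint>

        // Naive 16-bit depth <-> YCbCr packing: high byte in luma, low byte in chroma.
        struct YCbCr { uint8_t y, cb, cr; };

        YCbCr packDepth(uint16_t d) {
            YCbCr out;
            out.y  = static_cast<uint8_t>(d >> 8);    // most significant byte
            out.cb = static_cast<uint8_t>(d & 0xFF);  // least significant byte
            out.cr = 128;                             // unused channel held at midpoint
            return out;
        }

        uint16_t unpackDepth(const YCbCr& c) {
            return static_cast<uint16_t>((c.y << 8) | c.cb);
        }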

    Read the article

  • ORE graphics using Remote Desktop Protocol

    - by Sherry LaMonica
    Oracle R Enterprise graphics are returned as raster, or bitmap, graphics. Raster images consist of tiny squares of color information, referred to as pixels, that form points of color to create a complete image. Plots that contain raster images render quickly in R and create small, high-quality exported image files in a wide variety of formats.

    However, it is a known issue that the rendering of raster images can be problematic when creating graphics over a Remote Desktop connection. Raster images do not display in the windows device using Remote Desktop under the default settings. This happens because Remote Desktop restricts the number of colors when connecting to a Windows machine to 16 bits per pixel, and interpolating raster graphics requires many colors, at least 32 bits per pixel. For example, this simple embedded R image plot will be returned in a raster-based format using a standalone Windows machine:

        R> library(ORE)
        R> ore.connect(user="rquser", sid="orcl", host="localhost", password="rquser", all=TRUE)
        R> ore.doEval(function() image(volcano, col=terrain.colors(30)))

    Here, we first load the ORE packages and connect to the database instance using database login credentials. The ore.doEval function executes the R code within the database embedded R engine and returns the image back to the client R session. Over a Remote Desktop connection under the default settings, this graph will appear blank due to the restricted number of colors. Users who encounter this issue have two options to display ORE graphics over Remote Desktop: either raise Remote Desktop's Color Depth or direct the plot output to an alternate device.

    Option #1: Raise the Remote Desktop Color Depth setting

    In a Remote Desktop session, all environment variables, including the display variables determining Color Depth, are determined by the RDP-Tcp connection settings. For example, users can reduce the Color Depth when connecting over a slow connection. The available settings are 15, 16, 24, or 32 bits per pixel. To raise the Remote Desktop color depth:

    1. On the Windows server, launch Remote Desktop Session Host Configuration from the Accessories menu.
    2. Under Connections, right-click on RDP-Tcp and select Properties.
    3. On the Client Settings tab, either uncheck Limit Maximum Color Depth or set it to 32 bits per pixel.
    4. Click Apply, then OK, log out of the remote session, and reconnect.

    After reconnecting, the Color Depth on the Display tab will be set to 32 bits per pixel, and raster graphics will now display as expected. For ORE users, the increased color depth results in slightly reduced performance during plot creation, but the graph will be created instead of displaying an empty plot.

    Option #2: Direct plot output to an alternate device

    Plotting to a non-windows device is a good option if it's not possible to increase the Remote Desktop Color Depth, or if performance is degraded when creating the graph. Several device drivers are available for off-screen graphics in R, such as postscript, pdf, and png. On-screen devices include windows, X11, and Cairo. Here we output to the Cairo device to render an on-screen raster graphic. The grid.raster function in the grid package is analogous to other grid graphical primitives - it draws a raster image within the current plot's grid.

        R> options(device = "CairoWin")   # use Cairo device for plotting during the session
        R> library(Cairo)                 # load the Cairo, grid and png libraries
        R> library(grid)
        R> library(png)
        R> res <- ore.doEval(function() image(volcano, col=terrain.colors(30)))   # create embedded R plot
        R> img <- ore.pull(res, graphics = TRUE)$img[[1]]                         # extract image
        R> grid.raster(as.raster(readPNG(img)), interpolate = FALSE)              # generate raster graph
        R> dev.off()                                                              # turn off first device

    By default, the interpolate argument to grid.raster is TRUE, which means that what is actually drawn by R is a linear interpolation of the pixels in the original image. Setting interpolate to FALSE uses a sample from the pixels in the original image. A list of graphics devices available in R can be found in the Devices help file from the grDevices package:

        R> help(Devices)

    Read the article

  • ReSharper 8.0 EAP now available

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/28/resharper-8.0-eap-now-available.aspx

    JetBrains have just released ReSharper 8.0 Beta on their Early Access Programme at http://www.jetbrains.com/resharper/whatsnew/?utm_source=resharper8b&utm_medium=newsletter&utm_campaign=resharper&utm_content=customers

    ReSharper 8.0 comes with the following new features:

    - Support for Visual Studio 2013 Preview. Yes, ReSharper is known to work well with the fresh preview of Visual Studio 2013, and if you have already started digging into it, ReSharper 8.0 Beta is ready for the challenge.
    - Faster code fixes. Thanks to the new Fix in Scope feature, you can choose to batch-fix some of the code issues that ReSharper detects in the scope of a project or the whole solution. Supported fixes include removing unused directives and redundant casts.
    - Project dependency viewer. ReSharper is now able to visualize a project dependency graph for a bird's-eye view of dependencies within your solution, all without compiling anything!
    - Multifile templates. ReSharper's file templates can now be expanded to generate more than one file. For instance, this is handy for generating pairs of a main logic class and a class for extensions, or sets of partial files.
    - Navigation improvements. These include a new action called Go to Everything to let you search for a file, type or method name from the same input box; support for line numbers in navigation actions; a new tool window called Assembly Explorer for browsing through assemblies; and two more contextual navigation actions: Navigate to Generic Substitutions and Navigate to Assembly Explorer.
    - New solution-wide refactorings. The set of fresh refactorings is headlined by the highly requested Move Instance Method, to move methods between classes without making them static. In addition, there are Inline Parameter and Pull Parameter. Last but not least, we're also introducing 4 new XAML-specific refactorings!
    - Extraordinary XAML support. A plethora of new and improved functionality for all developers working with XAML code includes dedicated grid inspections and quick-fixes; Extract Style, Extract, Move and Inline Resource refactorings; atomic renaming of dependency properties; and a lot more.
    - More accessible code completion. ReSharper 8 makes more of its IntelliSense magic available in automatic completion lists, including extension methods and an option to import a type. We're also introducing double completion, which gives you additional completion items when you press the corresponding shortcut for the second time.
    - A new level of extensibility. With the new NuGet-based Extension Manager, discovering, installing and uninstalling ReSharper extensions becomes extremely easy in Visual Studio 2010 and higher. When we say extensions, we mean not only full-fledged plug-ins but also sets of templates or SSR patterns that can now be shared much more easily.
    - CSS support improvements. Smarter usage search for CSS attributes, new CSS-specific code inspections, configurable support for CSS3 and earlier versions, compatibility checks against popular browsers - there's a rough outline of what's new for CSS in ReSharper 8.
    - A command-line version of ReSharper. ReSharper 8 goes beyond Visual Studio: we now provide a free standalone tool with hundreds of ReSharper inspections, and additionally a duplicate-code finder, that you can integrate with your CI server or version control system.
    - Multiple minor improvements in areas such as decompiling and code formatting, as well as support for the Blue Theme introduced in Visual Studio 2012 Update 2.

    Read the article

  • Using HTML5 Today part 3 – Using Polyfills

    - by Steve Albers
    Shims help when adding semantic tags to older IE browsers, but there is a huge range of other new HTML5 features with varying support across browsers. Polyfills are JavaScript code and/or browser plug-ins that can provide older or less featured browsers with API support. The best polyfills detect whether the current browser has native support, and only add the functionality if necessary. The Douglas Crockford JSON2.js library is an example of this approach: if the browser already supports the JSON object, nothing changes. If JSON is not available, the library adds a JSON property to the global object.

    This approach provides some big benefits:

    - It lets you add great new HTML5 features to your web sites sooner.
    - It lets the developer focus on writing to the up-and-coming standard rather than proprietary APIs.
    - Where most one-off legacy code fixes tend to break down over time, well-done polyfills will stop executing over time (as customer browsers natively support the feature), meaning polyfill code may not need to be tested against new browsers, since they will execute the native methods instead.

    You should also remember that polyfills represent an entirely separate code path (and sometimes a plug-in) that requires testing for support. Also, polyfills tend to run on older browsers, which often have slower JavaScript performance. As a result you might find that performance on older browsers is not comparable. When looking for polyfills you can start by checking the Modernizr GitHub wiki or the HTML5 Please site.

    For an example of a polyfill, consider a page that writes a few geometric shapes on a <canvas>:

        <script src="jquery-1.7.1.min.js"></script>
        <script>
        $(document).ready(function () {
            drawCanvas();
        });

        function drawCanvas() {
            var context = $("canvas")[0].getContext('2d');
            // background
            context.fillStyle = "#8B0000";
            context.fillRect(5, 5, 300, 100);
            // empty box
            context.strokeStyle = "#B0C4DE";
            context.lineWidth = 4;
            context.strokeRect(20, 15, 80, 80);
            // circle
            context.arc(160, 55, 40, 0, Math.PI * 2, false);
            context.fillStyle = "#4B0082";
            context.fill();
        }
        </script>

    The result is a simple static canvas with a box and a circle. To enable this functionality on a pre-canvas browser we can find a polyfill. A check on html5please.com references FlashCanvas. Pull down the zip and extract the files (flashcanvas.js, flash10canvas.swf, etc.) to a directory on your site. Then, based on the documentation, you need to add a single line to your original HTML file:

        <!--[if lt IE 9]><script src="flashcanvas.js"></script><![endif]-->

    ...and you have canvas functionality! The IE conditional comments ensure that the library is only loaded in browsers where it is useful, improving page load and processing time. Like all polyfills, you should test to verify the functionality matches your expectations across the browsers you need to support. For instance, the FlashCanvas home page advertises 70% support of HTML5 Canvas spec tests.
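    As a concrete illustration of the detect-then-fill pattern described above, here is a hedged sketch - the real JSON2.js and Modernizr checks are more thorough than this:

        // Only define the feature when the browser lacks native support,
        // mirroring the JSON2.js approach described above.
        if (!window.JSON) {
            window.JSON = {};  // a real shim would add parse() and stringify() here
        }

        // A simplified canvas capability check, similar in spirit to Modernizr's:
        function supportsCanvas() {
            var el = document.createElement('canvas');
            return !!(el.getContext && el.getContext('2d'));
        }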

    Read the article

  • Where are my date ranges in Analytics coming from?

    - by Jeffrey McDaniel
    In the P6 Reporting Database there are two main tables to consider when viewing time - W_DAY_D and W_Calendar_FS. W_DAY_D is populated internally during the ETL process and provides a row for every day in the given time range. Each row contains aspects of that day, such as calendar year, month, week, and quarter, allowing it to be used as the time element when creating requests in Analytics to group data into these time granularities. W_Calendar_FS is used for calculations such as spreads, but is based on the same date range.

    The min and max day_dt (W_DAY_D) and daydate (W_Calendar_FS) are determined by the defined date range, which is a start date plus a rolling interval - generally the start date plus 3 years. In P6 Reporting Database 2.0 this date range was defined in the Configuration utility. As of P6 Reporting Database 3.0, with the introduction of the Extended Schema, this date range is set in the P6 web application. The Extended Schema uses this date range to calculate the data for near-real-time reporting in P6, and this same date range is validated and used for the P6 Reporting Database.

    The rolling date range means that if today is April 1, 2010 and the rolling interval is set to three years, the min date will be 1/1/2010 and the max date will be 4/1/2013. 1/1/2010 is the min date because we always back-fill to the beginning of the year. On April 2nd, the Extended Schema services are run and the date range is adjusted there to move the max date forward to 4/2/2013. When the ETL process is run, the Reporting Database picks up this change and also adjusts the max date on W_DAY_D and W_Calendar_FS. There are scenarios where date ranges affecting areas like resource limits may not be adjusted until a change occurs to cause a recalculation, but with general system usage the dates in these tables will progress forward with the rolling interval.

    Choosing a large date range can have an effect on the ETL process for the P6 Reporting Database. The extract portion of the process pulls spread data over into the STAR. The date range defines how long activity and resource assignment spread data is spread out in these tables. If an activity lasts 5 days, it will have 5 days of spread data. If a project lasts 5 years and the date range is 3 years, the spread data after that 3-year date range will be bucketed into the last day in the date range. For the overall project, and even at the activity level, you will still see the correct total values; you just would not be able to see the daily spread 5 years from now.

    This is an important question when choosing your date range: do you really need to see spread data down to the day 5 years in the future? Generally this amount of granularity years in the future is not needed. Remember, all those values 5, 10, 15, or 20 years in the future are still available to report on; they would just be in more of a summary format at the activity or project level. The data is always there; the level of granularity is the decision.
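    To make the rolling-range arithmetic concrete, here is a small sketch (an illustration only, not P6 code) of the min/max rule described above:

        from datetime import date

        def reporting_range(today, rolling_years=3):
            # Min date back-fills to January 1 of the current year; max date is
            # the same calendar day, rolling_years ahead (leap days ignored here).
            min_dt = date(today.year, 1, 1)
            max_dt = date(today.year + rolling_years, today.month, today.day)
            return min_dt, max_dt

        print(reporting_range(date(2010, 4, 1)))  # (date(2010, 1, 1), date(2013, 4, 1))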

    Read the article

  • How to place rooms procedurally (rule-based) in a game world

    - by gardian06
    I am trying to design the algorithm for my level generation, which is a rule-driven system. I have created all the rules for the system, and I have taken care to ensure that all rooms make sense in a grid-type setup. For example, these rooms could make this configuration. The logic-flow code that I have so far:

        Door {
            Vector3 position;
            POD orient;        // 5 possible values (up is not an option)
            bool Open;
        }

        Room {
            String roomRule;
            Vector3 roomPos;
            Vector3 dimensions;
            POD roomOrient;    // 4 possible values
            List<Door> doors;
        }

        LevelManager {
            float scale = 18f;
            List<Room> usedRooms;
            List<Door> openDoors;
            bool Grid[][][];

            Room CreateRoom(String rule, Vector3 position, POD orient) {
                // place received values based on rule
                // fill in other data
            }

            Vector3 getDimensions(String rule) {
                // return dimensions of the room
            }

            RotateRoom(POD rotateAmount) {
                // rotate all items in the room
            }

            MoveRoom(Room toBeMoved, POD orientation, float distance) {
                // move the position of the room based on inputs
            }

            GenerateMap(Vector3 size, Vector3 start, Vector3 end) {
                Grid = array[size.y][size.x][size.z];
                Room floatingRoom;
                floatingRoom = Room.CreateRoom(S01, start, rand(4));
                usedRooms.Add(floatingRoom);
                for each Door in floatingRoom.doors {
                    openDoors.Add(door);
                }
                // fill used grid spaces
                floatingRoom = Room.CreateRoom(S02, end, rand(4));
                usedRooms.Add(floatingRoom);
                for each Door in floatingRoom.doors {
                    openDoors.Add(door);
                }
                Vector3 nRoomLocation;
                Door workingDoor;
                string workingRoom;
                // fill used grid spaces
                // pick random door on the openDoors list
                workingDoor = /* randomDoor */;
                // get a random rule
                nRoomLocation = workingDoor.position;
                // then I'm lost
            }
        }

    I know that I have to ensure convergence (namely, that the end is reachable), and to do this until there are no more doors on the openDoors list. Right now I am simply trying to get this to work in 2D (there are rules that introduce 3D), but I am working on the presumption that a rigorous algorithm can be trivially extended to 3D.

    EDIT: My thought pattern so far is to take an existing open door and then:

    - pick a random room (restrictions can be put in later);
    - place that room's center at the door's location;
    - move the room in the direction of the door's orientation by half the room's dimension with respect to that axis;
    - test against the 3D array to see if all the grid points are open, or have been used, or if there is even space to put the room (caseEdge);
    - if caseEdge (which can also occur in between rooms), put the door on a toBeClosed list and remove it from the open list (placing a wall or something there);
    - then do some kind of test that both the start and the goal are connected and reachable from each other (each room has nodes for AI, but I don't want to "have" to pull those out to accomplish this).

    A rough sketch of the grid test is shown below. But this logic has a problem for, say, the U- or L-shaped rooms in my example, and I also have a problem conceptually if the room needs to be rotated.
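    A minimal, runnable sketch (my own illustration, in Python for brevity; the names are assumed) of the axis-aligned grid test described in the edit. U- or L-shaped rooms would need a per-cell footprint instead of a solid rectangle, and rotation would permute that footprint:

        def fits(grid, x, y, w, h):
            """True if a w-by-h footprint at (x, y) stays on the map and every cell is free."""
            if x < 0 or y < 0 or x + w > len(grid[0]) or y + h > len(grid):
                return False  # caseEdge: the footprint leaves the map
            return all(not grid[j][i] for j in range(y, y + h) for i in range(x, x + w))

        def place(grid, x, y, w, h):
            for j in range(y, y + h):
                for i in range(x, x + w):
                    grid[j][i] = True  # mark cells used

        grid = [[False] * 20 for _ in range(20)]  # 20x20 map, all cells open
        if fits(grid, 4, 4, 3, 2):
            place(grid, 4, 4, 3, 2)  # room accepted; otherwise close the door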

    Read the article

  • How to make a stack stable? Need help for an explicit resting contact scheme (2-dimensional)

    - by Register Sole
    Previously, I struggled with the sequential impulse-based method I developed. Thanks to jedediah referring me to this paper, I managed to rebuild the code and implement the simultaneous impulse-based method with a Projected Gauss-Seidel (PGS) iterative solver as described by Erin Catto (mentioned in the reference of the paper as [Catt05]).

    So here's how it currently is: The simulation handles 2-dimensional rotating convex polygons. Detection uses the separating-axis test, with a SKIN, meaning the closest points between two polygons are detected and determined if their distance is less than SKIN. To resolve collision, the simultaneous impulse-based method is used, solved with the iterative PGS solver as in Erin Catto's paper. Error correction is implemented using Baumgarte's stabilization (you can refer to either paper for this) via J V = beta/dt * overlap, where J is the Jacobian for the constraints, V the matrix containing the velocities of the bodies, beta an error-correction parameter that should be < 1, dt the time step taken by the engine, and overlap the overlap between the bodies (true overlap, so SKIN is ignored).

    However, it is still less stable than I expected :s I tried to stack hexagons (or squares, it doesn't really matter), and even with only 4 to 5 of them, they would swing! Also note that I am not looking for a sleeping scheme. But I would settle if you have any explicit scheme to handle resting contacts. That said, I would be more than happy if you have a way of treating it generally (as continuous collision, instead of explicitly as a special state).

    Ideas I have tried: using simultaneous position-based error correction as described in the paper in section 5.3.2; this turned out to be worse than the current scheme. If you want to know the parameters I used:

    - Hexagons, side 50 (pixels)
    - gravity 2400 (pixels/sec^2)
    - time step 1/60 (sec)
    - beta 0.1
    - restitution 0 to 0.2
    - coeff. of friction 0.2
    - PGS iterations 10
    - initial separation 10 (pixels)
    - mass 1 (the unit is irrelevant for now; I modified velocity directly <- impulse method)
    - inertia 1/1000

    Thanks in advance! I really appreciate any help from you guys!! :)

    EDIT: In response to Cholesky's comment about warm starting the solver and Baumgarte: Oh right, I forgot to mention! I do save the contact history and the impulse determined in this time step to be used as the initial guess in the next time step. As for the Baumgarte, here's what actually happens in the code. Collision is detected when the bodies' closest distance is less than SKIN, meaning they are actually still separated. If at this moment I used the PGS solver without Baumgarte, a restitution of 0 alone would be able to stop the bodies, separated by a distance of ~SKIN, in mid-air! So this isn't right; I want to have the bodies touching each other. So I turn on the Baumgarte, where its role is actually to pull the bodies together! Weird, I know - a scheme intended to push the bodies apart becomes useful for the reverse. Also, I found that if I increase the number of iterations to 100, stacks become much more stable, though the program becomes very slow.

    UPDATE: Since the stack swings left and right, could it be that something is wrong with my friction model? Current friction constraint: relative_tangential_velocity = 0
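    For reference, a hedged sketch (sign conventions roughly as in Catto's solver, with an added penetration "slop" threshold - an assumption on my part, not the poster's code) of how the beta/dt * overlap bias term typically enters one PGS iteration for a contact's normal constraint:

        # jv: relative normal velocity along the contact normal (< 0 when approaching)
        beta, dt, slop = 0.1, 1.0 / 60.0, 0.5  # slop in the same units as overlap (pixels)

        def normal_impulse(jv, overlap, eff_mass, accumulated):
            """One PGS iteration for a single contact's normal constraint."""
            bias = (beta / dt) * max(overlap - slop, 0.0)  # push-apart target velocity
            lam = -(jv - bias) * eff_mass                  # raw impulse this iteration
            new_acc = max(accumulated + lam, 0.0)          # clamp accumulated impulse >= 0
            return new_acc - accumulated, new_acc          # (applied delta, new accumulator)

    Ignoring overlaps below the slop threshold is one common way to keep the bias from pumping energy into a resting stack.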

    Read the article

  • WCF service dataContractSerializer maxItemsInObjectGraph in web.config

    - by Dave
    I am having issues specifying the dataContractSerializer maxItemsInObjectGraph in the host's web.config:

        <behaviors>
          <serviceBehaviors>
            <behavior name="beSetting">
              <serviceMetadata httpGetEnabled="True"/>
              <serviceDebug includeExceptionDetailInFaults="True" />
              <dataContractSerializer maxItemsInObjectGraph="2147483646"/>
            </behavior>
          </serviceBehaviors>
        </behaviors>
        <services>
          <service name="MyNamespace.MyService" behaviorConfiguration="beSetting">
            <endpoint address="http://localhost/myservice/"
                      binding="webHttpBinding"
                      bindingConfiguration="webHttpBinding1"
                      contract="MyNamespace.IMyService"
                      bindingNamespace="MyNamespace">
            </endpoint>
          </service>
        </services>

    The above has no effect on my data pull; the server times out because of the large volume of data. I can, however, specify the max limit in code, and that works:

        [ServiceBehavior(MaxItemsInObjectGraph = 2147483646, IncludeExceptionDetailInFaults = true)]
        public abstract class MyService : IMyService
        {
            // blah...
        }

    Does anyone know why I can't make this work through a web.config setting? I would like to keep it in the web.config so it is easier for future updates.
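    One variant sometimes suggested for cases like this (a hedged sketch of something to try, not a confirmed fix): apply the quota as an endpoint behavior and reference it from the endpoint itself, since dataContractSerializer is valid in both behavior collections. It is also worth double-checking that the service element's name attribute exactly matches the service's full type name, because a mismatch makes the configured behavior silently never apply:

        <behaviors>
          <endpointBehaviors>
            <behavior name="epSetting">
              <dataContractSerializer maxItemsInObjectGraph="2147483646"/>
            </behavior>
          </endpointBehaviors>
        </behaviors>
        <services>
          <service name="MyNamespace.MyService" behaviorConfiguration="beSetting">
            <!-- behaviorConfiguration on the endpoint, not just the service -->
            <endpoint address="http://localhost/myservice/"
                      binding="webHttpBinding"
                      bindingConfiguration="webHttpBinding1"
                      behaviorConfiguration="epSetting"
                      contract="MyNamespace.IMyService" />
          </service>
        </services>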

    Read the article

  • jQuery JSON error: Object doesn't support this property or method

    - by Abu Hamzah
    ERROR: Microsoft JScript runtime error: Object doesn't support this property or method

    I am using a WCF service to pull the data. It is very simple, for the purpose of a test, and it does return the data from the WCF service, but it fails in json2.js at lines 314-316:

        // We split the first stage into 4 regexp operations in order to work around
        // crippling inefficiencies in IE's and Safari's regexp engines. First we
        // replace all backslash pairs with '@' (a non-JSON character). Second, we
        // replace all simple value tokens with ']' characters. Third, we delete all
        // open brackets that follow a colon or comma or that begin the text. Finally,
        // we look to see that the remaining characters are only whitespace or ']' or
        // ',' or ':' or '{' or '}'. If that is so, then the text is safe for eval.

        if (/^[\],:{}\s]*$/.test(text.replace(/\\["\\\/bfnrtu]/g, '@').
            replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, ']').
            replace(/(?:^|:|,)(?:\s*\[)+/g, ''))) {

    Here is what I am doing:

        function serviceProxy(wjOrderServiceURL) {
            var _I = this;
            this.ServiceURL = wjOrderServiceURL;

            // *** Call a wrapped object
            this.invoke = function (options) {
                // Default settings
                var settings = {
                    serviceMethod: '',
                    data: null,
                    callback: null,
                    error: null,
                    type: "POST",
                    processData: false,
                    contentType: "application/json",
                    dataType: "text",
                    bare: false
                };
                if (options) {
                    $.extend(settings, options);
                }
                // ...
            };
        }

        function GetFederalHolidays() {
            $("#dContacts1").empty().html('Searching for Active Contacts...');
            ContactServiceProxy.invoke({
                serviceMethod: "Holidays",
                callback: function (response) {
                    // ProcessActiveContacts1(response);
                    debugger;
                },
                error: function (xhr, errorMsg, thrown) {
                    postErrorAndUnBlockUI(xhr, errorMsg, thrown);
                }
            });
        }

    Any idea what I am doing wrong? I tried changing dataType from text to json, but I get the same error as above.

    Read the article

  • Sharepoint - Passing parameters in URL to NewForm.aspx

    - by kevin
    Any suggestions would be great. I've inherited a system and have been asked to add a context-menu item that allows adding a new item. I've set up the context menu with the new option, and NewForm.aspx to accept and pull parameters from the URL for populating some fields. The context menu was created with a Content Editor web part and the following JavaScript:

        function Custom_AddDocLibMenuItems(m, ctx) {
            var strAction = "window.location='http://address?para1=....'";
            var strDisplayText = "Taxonomy";
            var strImagePath = "";

            // Add our new menu item
            CAMOpt(m, strDisplayText, strAction, strImagePath);

            // add a separator to the menu
            CAMSep(m);

            // false means that the standard menu items should also be rendered
            return false;
        }

    I'm having difficulty getting the proper values to concatenate onto the strAction string (which would be the full URL of NewForm.aspx). Any suggestions on how I can retrieve the values of the columns for the item that the user right-clicked to get the context menu?
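    One approach commonly cited for MOSS (hedged - the helper names below come from SharePoint's core.js, and the URL layout is hypothetical; verify both against your environment) is to read the hovered row's attributes inside the menu function and build the target URL from them:

        function Custom_AddDocLibMenuItems(m, ctx) {
            // itemTable is the global row element SharePoint sets for the hovered item
            var itemId = GetAttributeFromItemTable(itemTable, "ItemId", "Id");
            var listGuid = ctx.listName;  // GUID of the current list

            // hypothetical target URL for illustration only
            var strAction = "window.location='http://address/Lists/MyList/NewForm.aspx" +
                            "?para1=" + itemId + "&list=" + listGuid + "'";
            CAMOpt(m, "Taxonomy", strAction, "");
            CAMSep(m);
            return false;
        }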

    Read the article

  • Close a long-poll connection (jQuery ajax)

    - by MyGGaN
    Background

    I use a Tornado-like server with support for long polls. Each web page a user clicks around to sets up a long poll to the server like this:

        $.ajax({
            type: 'GET',
            url: "/mylongpollurl/",
            dataType: 'json',
            success: function(json) {
                // I do stuff here
            },
            error: function(xhr, errText, ex) {
                // If timeout, I send a new long-poll request
            }
        });

    Problem

    I now rely on data from Fiddler monitoring all requests made from my browser (FF at the moment). Page 1 is loaded and the long-poll request is made, now idling on the server side. I click a link to page 2; that page is loaded and sets up a long-poll request, BUT the long-poll request from page 1 is still idling on the server side (according to Fiddler). This means that I will stack up long-poll calls as I click around the site, ending up with lots of active connections on the server (or are they maybe sharing a connection?).

    My thoughts

    - As it's a Tornado-like server (using epoll), it can handle quite a lot of connections. But this fact is not to be exploited, in my opinion. What I mean is that I prefer not to have a timeout on the server for this case (where the client disappears).
    - I know those standalone pages would better use a common head and only swap content via ajax calls, but the design we use today was not my call...
    - The best way to solve this would probably be to have the connection reused (hard to pull off, I think) or closed as soon as the browser leaves the page (you click to another page).

    Thanks -- MyGGaN
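    On that last point, a hedged sketch of the client-side close: keep a handle to the pending request and abort it when the page unloads, so the server slot is freed promptly. (Note that jQuery's dataType option takes 'json', not a MIME type, as assumed in the snippet above.)

        var pendingPoll = null;

        function longPoll() {
            pendingPoll = $.ajax({
                type: 'GET',
                url: '/mylongpollurl/',
                dataType: 'json',
                success: function (json) { /* handle data */ longPoll(); },
                error: function () { longPoll(); }  // e.g. on timeout, re-poll
            });
        }

        // Abort the idle long poll when the user navigates away from the page.
        $(window).bind('beforeunload', function () {
            if (pendingPoll) {
                pendingPoll.abort();
            }
        });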

    Read the article

  • Return template as string - Django

    - by Ninefingers
    Hi All, I'm still not sure this is the correct way to go about this - maybe not - but I'll ask anyway. I'd like to re-write WordPress (justification: because I can), albeit more simply, in Django, and I'm looking to be able to configure elements in different ways on the page. So, for example, I might have:

    - Blog models
    - A site update message model
    - A latest comments model

    Now, for each page on the site, I want the user to be able to choose the order of, and any items that go on, that page. In my thought process, this would work something like:

        class Page(models.Model):
            Slug = models.CharField(max_length=100)

        class PageItem(models.Model):
            Page = models.ForeignKey(Page)
            ItemType = models.CharField(max_length=100)
            InstanceNum = models.IntegerField()
            # all models have primary keys

    Then, ideally, my template would loop through all the PageItems in a page, which is easy enough to do. But what if my page item is a site update as opposed to a blog post? Basically, I'd like to pull different item types back in different orders and display them using the appropriate templates. Now, I thought one way to do this would be, in views.py, to loop through all of the objects, call the appropriate view function, return a bit of HTML as a string, and then pipe that into the resulting template.

    My question is: is this the best way to go about doing this? If so, how do I do it? If not, which way should I be going? I'm pretty new to Django, so I'm still learning what it can and can't do - please bear with me. I've checked SO for dupes and don't think this has been asked before...
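    Django can do exactly the "template as string" step with render_to_string. A minimal sketch (the template paths, per-type naming convention, and app layout are assumptions for illustration; mark the fragments safe in the page template, e.g. {{ f|safe }}):

        # views.py - render each PageItem to an HTML fragment, then hand the
        # assembled list of fragments to the page template
        from django.shortcuts import render
        from django.template.loader import render_to_string

        from myapp.models import Page, PageItem  # hypothetical app layout

        def render_item(item):
            template = 'items/%s.html' % item.ItemType  # e.g. items/blog.html
            return render_to_string(template, {'item': item})

        def page_view(request, slug):
            page = Page.objects.get(Slug=slug)
            items = PageItem.objects.filter(Page=page).order_by('InstanceNum')
            fragments = [render_item(i) for i in items]
            return render(request, 'page.html', {'fragments': fragments})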

    Read the article

  • Creating an IIS 6 Virtual Directory with PowerShell v2 over WMI/ADSI

    - by codepoke
    I can create an IIsWebVirtualDir or IIsWebVirtualDirSetting with WMI, but I've found no way to turn the virtual directory into an IIS Application. The virtual directory wants an AppFriendlyName and a Path. That's easy, because they're part of the ...Setting object. But in order to turn the virtual directory into an App, you need to set AppIsolated=2 and AppRoot=[its root]. I cannot do this with WMI. I'd rather not mix ADSI and WMI, so if anyone can coach me through making this happen in WMI, I'd be very happy. Here's my demo code:

        $server = 'serverName'
        $site = 'W3SVC/10/ROOT/'
        $app = 'AppName'

        # If I use these args, the VirDir is not created at all. Fails to write read-only prop
        # $args = @{'Name'=('W3SVC/10/ROOT/' + $app); `
        #           'AppIsolated'=2; 'AppRoot'='/LM/' + $site + $app}

        # So I use this single arg
        $args = @{'Name'=($site + $app)}
        $args   # Look at the args to make sure I'm putting what I think I am

        $v = Set-WmiInstance -Class IIsWebVirtualDir -Authentication PacketPrivacy `
             -ComputerName $server -Namespace root\microsoftiisv2 -Arguments $args
        $v.Put()   # VirDir now exists

        # Pull the settings object for it and prove I can tweak it
        $filter = "Name = '" + $site + $app + "'"
        $filter
        $v = Get-WmiObject -Class IIsWebVirtualDirSetting -Authentication PacketPrivacy `
             -ComputerName $server -Namespace root\microsoftiisv2 -Filter $filter
        $v.AppFriendlyName = $app
        $v.Put()
        $v   # Yep. Changes work. Goes without saying I cannot change AppIsolated or AppRoot

        # But ADSI can change them without a hitch
        # Slower than molasses in January, but it works
        $a = [adsi]("IIS://$server/" + $site + $app)
        $a.Put("AppIsolated", 2)
        $a.Put("AppRoot", ('/LM/' + $site + $app))
        $a.Put("Path", "C:\Inetpub\wwwroot\news")
        $a.SetInfo()
        $a

    Any thoughts?
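    One WMI-only avenue that may be worth a try (hedged: on IIS 6, the IIsWebVirtualDir class in root\MicrosoftIISv2 exposes an AppCreate2 method that marks the directory as an application, rather than writing the read-only AppIsolated/AppRoot properties directly):

        # a hedged sketch: promote the virtual directory to an application purely over WMI
        $vd = Get-WmiObject -Class IIsWebVirtualDir -Authentication PacketPrivacy `
              -ComputerName $server -Namespace root\microsoftiisv2 `
              -Filter "Name = 'W3SVC/10/ROOT/AppName'"
        $vd.AppCreate2(2)   # 0 = in-process, 1 = out-of-process, 2 = pooled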

    Read the article

  • Python, unit test - Pass command line arguments to setUp of unittest.TestCase

    - by sberry2A
    I have a script that acts as a wrapper for some unit tests written using the Python unittest module. In addition to cleaning up some files, creating an output stream, and generating some code, it loads test cases into a suite using unittest.TestLoader().loadTestsFromTestCase(). I am already using optparse to pull out several command-line arguments used for determining the output location, whether to regenerate code, and whether to do some cleanup. I also want to pass a configuration variable, namely an endpoint URI, for use within the test cases.

    I realize I can add an OptionParser to the setUp method of the TestCase, but I want to instead pass the option to setUp. Is this possible using loadTestsFromTestCase()? I can iterate over the returned TestSuite's TestCases, but can I manually call setUp on the TestCases?

    EDIT: I wanted to point out that I am able to pass the arguments to setUp if I iterate over the tests and call setUp manually, like this:

        (options, args) = op.parse_args()
        suite = unittest.TestLoader().loadTestsFromTestCase(MyTests.TestSOAPFunctions)
        for test in suite:
            test.setUp(options.soap_uri)

    However, I am using xmlrunner for this, and its run method takes a TestSuite as an argument. I assume it will run the setUp method itself, so I would need the parameters available within the XMLTestRunner. I hope this makes sense.
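    One lightweight pattern (a sketch, not the only way): stash the parsed option on the TestCase class before loadTestsFromTestCase() builds the suite, and read it from self in an ordinary zero-argument setUp. Because setUp keeps its normal signature, this works with any runner, xmlrunner included:

        import unittest

        class TestSOAPFunctions(unittest.TestCase):
            soap_uri = None  # injected by the wrapper script before the suite is built

            def setUp(self):
                self.uri = self.soap_uri  # normal setUp signature - no extra arguments

            def test_uri_present(self):
                self.assertTrue(self.uri)

        # wrapper script:
        TestSOAPFunctions.soap_uri = 'http://example.com/soap'  # e.g. options.soap_uri
        suite = unittest.TestLoader().loadTestsFromTestCase(TestSOAPFunctions)
        unittest.TextTestRunner().run(suite)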

    Read the article

  • Parse HTML and find data in it

    - by Dan.StackOverflow
    Hi all. I am trying to use html5lib to parse an HTML page into something I can query with XPath. html5lib has close to zero documentation and I've spent too much time trying to figure this problem out. The ultimate goal is to pull out the second row of a table:

        <html>
          <table>
            <tr><td>Header</td></tr>
            <tr><td>Want This</td></tr>
          </table>
        </html>

    So let's try it:

        >>> doc = html5lib.parse('<html><table><tr><td>Header</td></tr><tr><td>Want This</td></tr></table></html>', treebuilder='lxml')
        >>> doc
        <lxml.etree._ElementTree object at 0x1a1c290>

    That looks good. Let's see what else we have:

        >>> root = doc.getroot()
        >>> print(lxml.etree.tostring(root))
        <html:html xmlns:html="http://www.w3.org/1999/xhtml"><html:head/><html:body><html:table><html:tbody><html:tr><html:td>Header</html:td></html:tr><html:tr><html:td>Want This</html:td></html:tr></html:tbody></html:table></html:body></html:html>

    LOL WUT? Seriously. I was planning on using some XPath to get at the data I want, but that doesn't seem to work. So what can I do? I am willing to try different libraries and approaches.
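    What the dump shows is that html5lib has placed everything in the XHTML namespace (and inserted the tbody element that HTML parsing implies), so a plain //td matches nothing. A sketch of one way through it: pass a prefix mapping to lxml's xpath():

        import html5lib

        doc = html5lib.parse('<html><table><tr><td>Header</td></tr>'
                             '<tr><td>Want This</td></tr></table></html>',
                             treebuilder='lxml')
        ns = {'h': 'http://www.w3.org/1999/xhtml'}

        # Note the tbody that the HTML parser inserted around the rows.
        cells = doc.xpath('//h:table/h:tbody/h:tr[2]/h:td/text()', namespaces=ns)
        print(cells)  # ['Want This']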

    Read the article

  • NSString variable out of scope in sub-class (iPhone/Obj-C)

    - by Rich
    I am following along with an example in a book, with the code exactly as it is in the example code as well as in the book, and I'm getting an error at runtime. I'll try to describe the life cycle of this variable as well as I can.

    I have a controller class with a nested array that is populated with string literals (an NSArray of NSArrays, the nested NSArrays initialized with arrayWithObjects: where the objects are all string literals - @"some string"). I access these strings with a helper method added via a category on NSArray (to pull strings out of a nested array). My controller gets a reference to this string and assigns it to an NSString property on a child controller using dot notation. The code looks like this (nestedObjectAtIndexPath is my helper method):

        NSString *rowKey = [rowKeys nestedObjectAtIndexPath:indexPath];
        controller.keypath = rowKey;

    keypath is a synthesized nonatomic, retain property defined in a base class. When I hit a breakpoint in the controller at the above code, the NSString's value is as expected. When I hit the next breakpoint inside the child controller, the object id of the keypath property is the same as before, but instead of showing me the value of the NSString, Xcode says the variable is "out of scope", which is also the error I see in the console. This also happens in another sub-class of the same parent.

    I tried googling, and I saw slightly similar cases where people were suggesting this had to do with retain counts. I was under the impression that by using dot notation on a synthesized property, my class would be using an auto-generated accessor that would increase the retain count for me, so that I wouldn't have this problem. Could there be any implications because I'm accessing it in a sub-class while the property is defined in the parent? I don't see anything in the book's errata about this, but the book is relatively new (Apress - More iPhone 3 Dev). I have also double-checked that my code matches the example 100 times.

    Read the article

  • Windows 7 - Enable Network DTC Access

    - by Russ Clark
    I have a Visual Studio 2010 Windows Forms application in which I start a transaction using the TransactionScope class. I then receive a message from a SQL Server Service Broker message queue, which works fine. Next, I try to call a stored procedure in the same database through my data access layer, which is a Visual Studio dataset (.xsd file). When I make this second call to the database, I get the following error message:

        The MSDTC transaction manager was unable to pull the transaction from the
        source transaction manager due to communication problems. Possible causes
        are: a firewall is present and it doesn't have an exception for the MSDTC
        process, the two machines cannot find each other by their NetBIOS names,
        or the support for network transactions is not enabled for one of the two
        transaction managers. (Exception from HRESULT: 0x8004D02B)

    I've seen several posts on the web that talk about enabling network DTC access through dcomcnfg.exe, and allowing DTC to communicate through Windows Firewall. I've done those things and am still having this problem. I know our remote database server is set up to enable DTC access, because we are using similar transactions in other projects built with Visual Studio 2008 on Windows XP and Vista. I think there is something specific about Windows 7 and Visual Studio 2010 causing this problem, but I haven't been able to find out what it is. Can anyone help with this problem?

    Read the article

  • How do I solve column width problems in an SSRS Tablix?

    - by David Stein
    I'm creating a simple report from Microsoft Dynamics CRM, pulling in the following dataset:

        SELECT FQD.productidname,
               FQD.NEW_PRICEBREAKS,
               FQD.NEW_WEEKSARO,
               LTRIM(RTRIM(FP.NEW_PRODUCTNAME)) AS NewProductDesc,
               FQD.productdescription,
               FQD.quoteid,
               FQD.quantity,
               FQD.productiddsc,
               FQD.baseamount,
               FQD.lineitemnumber,
               FQD.priceperunit,
               FQD.extendedamount,
               ISNULL(FP.productnumber, '') AS productnumber,
               ISNULL(FQD.uomidname, '-') AS Unit,
               FQD.tax AS Tax,
               FQD.volumediscountamount * FQD.quantity AS Discount,
               FQD.manualdiscountamount AS MDiscount,
               FQD.quotedetailid,
               FQD.crm_moneyformatstring,
               FQD.NEW_PRICEPERUNIT,
               FQD.NEW_PRICEPERUNIT_BASE
        FROM FilteredQuoteDetail FQD
        LEFT OUTER JOIN FilteredProduct FP ON FQD.productid = FP.productid
        WHERE FQD.quoteid = @CRM_QuoteId

    The NewProductDesc field comes out too wide. If I shorten it in design view, it still comes out too wide in the presentation. I think the field is coming out that wide because the database field probably has a bunch of blank spaces at the end of every description. I could not find a way to force that field in the Tablix not to grow horizontally, so I attempted to remedy it in the dataset by replacing the NewProductDesc line with:

        LTRIM(RTRIM(FP.NEW_PRODUCTNAME)) AS NewProductDesc

    However, that has no effect either. Can anyone suggest why this behavior is occurring? Can anyone tell me how I can force the field not to grow horizontally?

    Read the article

  • Getting MSDeploy working on our build/integration server - Is an MSBuild upgrade necessary?

    - by Jeff D
    We have what I think is a fairly standard build process:

    1. Developer: checks in code.
    2. Build server: polls the repo, sees the change, and kicks off a build that:
    3. updates from the repo, builds with MSBuild, and runs unit tests with NUnit;
    4. creates the installer package.

    Our security team allows us to pull from the build server, but does not allow the build server to push. So we generally RDP in, download the installers, and run them, which rules out the slick deployment services, so I would need to generate packages instead. I'd like to use MSDeploy, except that we have the following issues:

    - We're on .NET 3.5, and the MSBuild target (Package) that uses MSDeploy requires 4.0. Is there anything I'd need to install other than .NET 4.0 RC for this? (Would MSBuild be part of that upgrade?)
    - When I generate packages with MSDeploy, I see that I don't have just one file. There's a zip, deploy.cmd, SourceManifest.xml, and SetParameters.xml. What are all the other files for, and why wouldn't they all be in the 'package'?
    - It sounds as if you can create packages by telling the system to look at a working IIS site. But if the packages are built from a CI environment, aren't you basically out of luck here? It feels like they designed some of this for small-scale developers deploying from their dev environment. That's a fine use case, but I'm interested in seeing what everyone's enterprise experience is with the tool.

    Any suggestions?

    Read the article

  • SSRS for Sharepoint, Images in a report from a Sharepoint List URL?

    - by James Polhemus
    Greetings Sabios,

    I have several reports I run successfully where the data comes from a SharePoint list in the form of an XML dataset. I am, however, having trouble with one. I have a report that pulls an image file onto the main body of the report. This data too comes from a SharePoint list in the form of an XML dataset, which sends me the URL to the jpeg or bmp or gif, whatever the case may be. I can successfully pull this off in my own Visual Studio IDE, and my local Report Server will render it as well. It won't run on my SharePoint Report Server (my MOSS runs through https while my SharePoint Report Server is http - might this matter?).

    When I upload it to SharePoint and run it through the SharePoint Report Server, I get back EVERYTHING in the report header and footer (dataset text and embedded images) but just a big RED X where the main image should be. I have done everything the boards say:

    A. I made sure the Unattended Execution Account is running on the Report Server.
    B. I have ensured the URL comes back in clean format (else the images wouldn't render locally either, and they do).

    The report logs throw this exception:

        ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.ContainerTypeNotSupportedException:
        The target location you specified is not supported by the report server. A report definition (.rdl),
        report model (.smdl), resource, or shared data source (.rsds) file must be located within a library
        or a folder within it.

    Any takers? Even my SharePoint administrator can't help me :)

    James

    Read the article

  • Google Datastore w/ JDO: Access Times?

    - by Bosh
    I'm hitting what appears (to me) to be strange behavior when I pull data from the Google datastore over JDO. In particular, the query executes quickly (say 100 ms), but finding the size of the resulting List takes about one second! Indeed, whatever operation I try to perform on the resulting list takes about a second. Has anybody seen this behavior? Is it expected? Unusual? Any way around it?

        PersistenceManager pm = PMF.getPersistenceManager();
        Query q = pm.newQuery("select from " + Person.class.getName() + " order by key limit 1000");
        System.out.println("getting all at " + System.currentTimeMillis());
        mcs = (List<Med>) q.execute();
        System.out.println("got all at " + System.currentTimeMillis());
        int size = mcs.size();
        System.out.println("size was " + size + " at " + System.currentTimeMillis());

    Output:

        getting all at 1271549139441
        got all at 1271549139578
        size was 850 at 1271549141071

    -B

    Read the article

  • Can I encrypt web.config with a custom protection provider whose assembly is not in the GAC?

    - by James
    I have written a custom protected configuration provider for my web.config. When I try to encrypt my web.config with it, I get the following error from aspnet_regiis:

        aspnet_regiis.exe -pef appSettings . -prov CustomProvider

    (This is running in my MSBuild.)

        Could not load file or assembly 'MyCustomProviderNamespace' or one of its
        dependencies. The system cannot find the file specified.

    After checking with the Fusion log, I can confirm it checks both the GAC and 'C:/WINNT/Microsoft.NET/Framework/v2.0.50727/' (the location of aspnet_regiis), but it cannot find the provider. I do not want to move my component into the GAC; I want to leave the custom assembly in my ApplicationBase, to copy around to various servers without having to pull/push from the GAC. Here is my provider configuration in the web.config:

        <configProtectedData>
          <providers>
            <add name="CustomProvider" type="MyCustomProviderNamespace.MyCustomProviderClass, MyCustomProviderNamespace" />
          </providers>
        </configProtectedData>

    I want aspnet_regiis to check my ApplicationBase bin folder for this assembly. Has anyone got any ideas?

    Read the article

  • HL7 parsing in PHP

    - by Narcissus
    Hi all, I'm looking into options for parsing HL7 messages via PHP. I'm aware of the Net_HL7 package on PEAR but to be perfectly honest, I don't think that I want to base my code around a seemingly 'abandoned' package and even if I did, I just don't think that my brain suits the functions 'correctly'. Maybe if I had more of an HL7 background it would make a bit more sense, I don't know. Anyway: I'm guessing that 95% of the time, I'm going to be parsing and reading data from messages. The other 5%, I'll be creating and/or sending messages. I don't necessarily need to do any form of validation on the messages themselves, I just need to pull/push data. I definitely need support for 'non-XML' HL7 v2.x, but naturally XML-based v2 and v3 would be a bonus. So does anyone have any suggestions as to other libraries that I might use? I'm looking for pure PHP solutions as I want to have minimal requirements on the server that aren't "copy this directory here". Thanks!

    Read the article

  • C# Winforms TabControl elements reading as empty until TabPage selected

    - by Geo Ego
    I have a WinForms app I am writing in C#. On my form I have a TabControl with seven pages, each full of elements (TextBoxes and DropDownLists, primarily). I pull some information in with a DataReader, populate a DataTable, and use the elements' DataBindings.Add method to fill those elements with the current values. The user is able to enter data into these elements, press "Save", and I then set the parameters of an UPDATE query using the elements' Text fields. For instance:

        updateCommand.Parameters.Add("@CustomerName", SqlDbType.VarChar, 100).Value = CustomerName.Text;

    The problem I have is that once I load the form, all of the elements are apparently considered empty until I select each tab manually. Thus, if I press "Save" immediately upon loading the form, all of the fields on the TabPages that I have not yet selected try to UPDATE with empty data (not nice). As I select each TabPage, those elements will then send their data along properly. For the time being, I've worked out a (very) ugly workaround where I programmatically select each TabPage when the data is populated for the first time, but that's an unacceptable long-term solution. My question is: how can I get all of the elements on the TabPages to return their data properly before the user selects each TabPage?
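    For what it's worth, the usual explanation is that controls on a TabPage are not created until the page is first shown, so their bindings have nothing to push values into. A hedged sketch of one commonly cited workaround - forcing each page's handle to be created after the bindings are set up, without visibly flipping through the tabs (tabControl1 is an assumed control name):

        // Touching Control.Handle forces the control (and its children) to be
        // created, which lets the data bindings populate the controls.
        foreach (TabPage page in tabControl1.TabPages)
        {
            IntPtr unused = page.Handle;  // the getter's side effect is the point
        }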

    Read the article
