Search Results

Search found 19093 results on 764 pages for 'max path'.

Page 369/764

  • How do I mount a CIFS share via FSTAB and give full RW to Guest

    - by Kendor
    I want to create a Public folder that has full RW access. The problem with my configuration is that Windows users have no issues as guests (they can RW and Delete), but my Ubuntu client can't do the same. We can only write and read, but not create or delete. Here is my smb.conf from my server: [global] workgroup = WORKGROUP netbios name = FILESERVER server string = TurnKey FileServer os level = 20 security = user map to guest = Bad Password passdb backend = tdbsam null passwords = yes admin users = root encrypt passwords = true obey pam restrictions = yes pam password change = yes unix password sync = yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . add user script = /usr/sbin/useradd -m '%u' -g users -G users delete user script = /usr/sbin/userdel -r '%u' add group script = /usr/sbin/groupadd '%g' delete group script = /usr/sbin/groupdel '%g' add user to group script = /usr/sbin/usermod -G '%g' '%u' guest account = nobody syslog = 0 log file = /var/log/samba/samba.log max log size = 1000 wins support = yes dns proxy = no socket options = TCP_NODELAY panic action = /usr/share/samba/panic-action %d [homes] comment = Home Directory browseable = no read only = no valid users = %S [storage] create mask = 0777 directory mask = 0777 browseable = yes comment = Public Share writeable = yes public = yes path = /srv/storage The following FSTAB entry doesn't yield full R/W access to the share. //192.168.0.5/storage /media/myname/TK-Public/ cifs rw 0 0 This doesn't work either //192.168.0.5/storage /media/myname/TK-Public/ cifs rw,guest,iocharset=utf8,file_mode=0777,dir_mode=0777,noperm 0 0 Using the following location in Nemo/Nautilus w/o the Share being mounted does work: smb://192.168.0.5/storage/ Extra info. I just noticed that if I copy a file to the share after mounting, my Ubuntu client immediately makes "nobody" the owner, and the group "no group" has read and write, with everyone else as read-only. What am I doing wrong?
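
    For comparison, one variant that often matters with guest cifs mounts is mapping ownership to the local desktop user with uid/gid, and adding nounix so that file_mode/dir_mode actually apply when the server reports its own ownership (the "nobody"/"no group" symptom above). A hedged sketch of the fstab line; the uid/gid of 1000 and the nounix choice are assumptions, the rest is taken from the question:

        //192.168.0.5/storage  /media/myname/TK-Public/  cifs  guest,uid=1000,gid=1000,nounix,file_mode=0777,dir_mode=0777,iocharset=utf8  0  0

    If writes still arrive on the server owned by nobody with restrictive modes, the usual companion change is on the Samba side of the [storage] share (for example force user and force create mode) rather than in the mount options.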

    Read the article

  • questions on a particular algorithm

    - by paul smith
    Upon searching for a fast prime algorithm, I stumbled upon this: public static boolean isP(long n) { if (n==2 || n==3) return true; if ((n&0x1)==0 || n%3==0 || n<2) return false; long root=(long)Math.sqrt(n)+1L; // we check just numbers of the form 6*k+1 and 6*k-1 for (long k=6;k<=root;k+=6) { if (n%(k-1)==0) return false; if (n%(k+1)==0) return false; } return true; } My questions are: Why is long being used everywhere instead of int? Because with a long type the argument could be much larger than Integer.MAX_VALUE, thus making the method more flexible? In the second 'if', is n&0x1 the same as n%2? If so, why didn't the author just use n%2? To me it's more readable. In the line that sets the 'root' variable, why add the 1L? What is the run-time complexity? Is it O(sqrt(n/6)) or O(sqrt(n)/6)? Or would we just say O(n)?
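
    For comparison, here is a minimal sketch of the same 6k±1 trial division using int and the plain modulo test (equivalent to the bitwise check for non-negative values); the loop bound also makes the cost explicit: roughly sqrt(n)/6 iterations, i.e. O(sqrt(n)) overall.

        // Same 6k±1 trial division as above, int version; n % 2 == (n & 1) for n >= 0.
        public static boolean isPrime(int n) {
            if (n == 2 || n == 3) return true;
            if (n < 2 || n % 2 == 0 || n % 3 == 0) return false;
            int root = (int) Math.sqrt(n) + 1;        // +1 guards against sqrt() rounding down
            for (int k = 6; k <= root; k += 6) {      // ~sqrt(n)/6 iterations => O(sqrt(n))
                if (n % (k - 1) == 0 || n % (k + 1) == 0) return false;
            }
            return true;
        }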

    Read the article

  • Latest update to Ubuntu 13.10 broke Intel graphics drivers

    - by James Davies
    I'm running a copy of Ubuntu 13.10 on an i7-4771 w/ Intel HD4600 Graphics using a Dell Ultrasharp 1440p monitor via Displayport. Up until today this configuration had been working perfectly; however, the latest update appears to have broken my graphics configuration, and xorg is now refusing to go above 1920x1200. Running xrandr, it appears the driver incorrectly thinks my monitor is plugged into the HDMI port and is detecting a max resolution of 1920x1200 instead of 2560x1440 (it's actually plugged in via Displayport). Based on the apt history.log, the latest update was for the kernel. I'm presuming the issue is that the official Intel driver hasn't been updated to support this version? Is there any way to resolve this, or will I need to upgrade to 14.10 to get the latest driver from Intel? Start-Date: 2014-05-28 11:30:57 Commandline: aptdaemon role='role-commit-packages' sender=':1.473' Install: linux-image-extra-3.11.0-22-generic:amd64 (3.11.0-22.38), linux-image-3.11.0-22-generic:amd64 (3.11.0-22.38), linux-headers-3.11.0-22:amd64 (3.11.0-22.38), linux-headers-3.11.0-22-generic:amd64 (3.11.0-22.38)
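
    Until the driver catches up, the missing mode can often be added by hand as a stopgap; a hedged sketch (the DP1 output name is an assumption, so substitute whatever xrandr reports for the DisplayPort connector, and the --newmode numbers should be copied from what cvt actually prints):

        cvt 2560 1440 60                                   # prints a Modeline for the missing resolution
        # xrandr --newmode "2560x1440_60.00" <timing numbers printed by cvt>
        xrandr --addmode DP1 "2560x1440_60.00"             # DP1 assumed; check `xrandr` for the real name
        xrandr --output DP1 --mode "2560x1440_60.00"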

    Read the article

  • Server-Sent Events using GlassFish (TOTD #179)

    - by arungupta
    Bhakti blogged about Server-Sent Events on GlassFish and I've been planning to try it out for the past few days. Finally, I took some time out today to learn about it and build a simplistic example showcasing the touch points. Server-Sent Events is developed as part of the HTML5 specification and provides push notifications from a server to a browser client in the form of DOM events. It is defined as a cross-browser JavaScript API called EventSource. The client creates an EventSource by requesting a particular URL and registers an onmessage event listener to receive the event notifications. This can be done as shown: var url = 'http://' + document.location.host + '/glassfish-sse/simple'; eventSource = new EventSource(url); eventSource.onmessage = function (event) { var theParagraph = document.createElement('p'); theParagraph.innerHTML = event.data.toString(); document.body.appendChild(theParagraph);} This code subscribes to a URL, receives the data in the event listener, adds it to an HTML paragraph element, and displays it in the document. This is where you'd parse JSON or do other processing if some other data format is received from the URL. The URL to which the EventSource is subscribed is updated on the server side, and there are multiple ways to do that. GlassFish 4.0 provides support for Server-Sent Events, and it can be achieved by registering a handler as shown below: @ServerSentEvent("/simple") public class MySimpleHandler extends ServerSentEventHandler { public void sendMessage(String data) { try { connection.sendMessage(data); } catch (IOException ex) { . . . } }} And then events can be sent to this handler using a singleton session bean as shown: @Startup @Stateless public class SimpleEvent { @Inject @ServerSentEventContext("/simple") ServerSentEventHandlerContext<MySimpleHandler> simpleHandlers; @Schedule(hour="*", minute="*", second="*/10") public void sendDate() { for(MySimpleHandler handler : simpleHandlers.getHandlers()) { handler.sendMessage(new Date().toString()); } }} This stateless session bean injects ServerSentEventHandlers listening on the "/simple" path. Note, there may be multiple handlers listening on this path. The sendDate method triggers every 10 seconds and sends the current timestamp to all the handlers. The client-side browser simply displays the string. The HTTP request headers look like: Accept: text/event-stream Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3 Accept-Encoding: gzip,deflate,sdch Accept-Language: en-US,en;q=0.8 Cache-Control: no-cache Connection: keep-alive Cookie: JSESSIONID=97ff28773ea6a085e11131acf47b Host: localhost:8080 Referer: http://localhost:8080/glassfish-sse/faces/index2.xhtml User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.54 Safari/536.5 And the response headers are: Content-Type: text/event-stream Date: Thu, 14 Jun 2012 21:16:10 GMT Server: GlassFish Server Open Source Edition 4.0 Transfer-Encoding: chunked X-Powered-By: Servlet/3.0 JSP/2.2 (GlassFish Server Open Source Edition 4.0 Java/Apple Inc./1.6) Notice that the MIME type of the messages from the server to the client is text/event-stream, which is defined by the specification.
The code in Bhakti's blog can be further simplified by using the recently-introduced Twitter API for Java as shown below: @Schedule(hour="*", minute="*", second="*/10") public void sendTweets() { for(MyTwitterHandler handler : twitterHandler.getHandlers()) { String result = twitter.search("glassfish", String.class); handler.sendMessage(result); }} The complete source explained in this blog can be downloaded here and tried on GlassFish 4.0 build 34. The latest promoted build can be downloaded from here and the complete source code for the API and implementation is here. I tried this sample on Chrome Version 19.0.1084.54 on Mac OS X 10.7.3.

    Read the article

  • 3d Picking under reticle

    - by Wolftousen
    I'm currently trying to work out some 3D picking code that I started years ago but then lost interest in once the assignment was completed (this part wasn't actually part of the assignment). I am not using the mouse coords for picking; I'm just using the position in 3D space and a ray directly out from there. A small hitch, though, is that I want to use a cone and not a ray. Here are the variables I'm using: float iReticleSlope = 95/3000; //inverse reticle slope float baseReticle = 1; //radius of the reticle at z = 0 float maxRange = 3000; //max range to target Quaternion orientation; //the cameras orientation Vector3d position; //the cameras position Then I loop through each object in the world: Vector3d transformed; //object position after transformations float d, r; //holder variables for(i = 0; i < objects.length; i++) { transformed = position - objects[i].position; //transform the position relative to camera orientation.multiply(transformed); //orient the object relative to the camera if(transformed.z < 0) { d = sqrt(transformed[0] * transformed[0] + transformed[1] * transformed[1]); r = -transformed[2] * iReticleSlope + objects[i].radius; if(d < r && -transformed[2] - objects[i].radius <= maxRange) { //the object is under the reticle } else { //the object is not under the reticle } } else { //the object is not under the reticle } } Now this all works fine and dandy until the window ratio doesn't match the resolution ratio. Is there any simple way to account for that?

    Read the article

  • Visually and audibly unambiguous subset of the Latin alphabet?

    - by elliot42
    Imagine you give someone a card with the code "5SBDO0" on it. In some fonts, the letter "S" is difficult to visually distinguish from the number five (as with the number zero and the letter "O"). Reading the code out loud, it might be difficult to distinguish "B" from "D", necessitating saying "B as in boy," "D as in dog," or using a "phonetic alphabet" instead. What's the biggest subset of letters and numbers that will, in most cases, both look unambiguous visually and sound unambiguous when read aloud? Background: We want to generate a short string that can encode as many values as possible while still being easy to communicate. Imagine you have a 6-character string, "123456". In base 10 this can encode 10^6 values. In hex "1B23DF" you can encode 16^6 values in the same number of characters, but this can sound ambiguous when read aloud. ("B" vs. "D") Likewise for any string of N characters, you get (size of alphabet)^N values. The string is limited to a length of about six characters, due to wanting to fit easily within the capacity of human working memory. Thus to find the max number of values we can encode, we need to find the largest unambiguous set of letters/numbers. There's no reason we can't consider the letters G-Z, and some common punctuation, but I don't want to have to manually pairwise-compare "does G sound like A?", "does G sound like B?", "does G sound like C?" myself. As we know this would be O(n^2) linguistic work to do =)...
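
    Whatever subset wins, the encoding side is straightforward; a minimal Java sketch is below. The alphabet in it is only an illustrative guess (a few commonly confused characters such as 0/O, 1/I/L, 5/S, 8/B and U/V dropped), not a claim about the optimal set.

        import java.security.SecureRandom;

        public class CodeGenerator {
            // Illustrative alphabet only: 0/O, 1/I/L, 5/S, 8/B and U/V removed as examples of
            // visually or audibly confusable characters (an assumption, not the proven optimum).
            private static final String ALPHABET = "234679ACDEFGHJKMNPQRTWXYZ";
            private static final SecureRandom RNG = new SecureRandom();

            public static String randomCode(int length) {
                StringBuilder sb = new StringBuilder(length);
                for (int i = 0; i < length; i++) {
                    sb.append(ALPHABET.charAt(RNG.nextInt(ALPHABET.length())));
                }
                return sb.toString();
            }

            public static void main(String[] args) {
                // (alphabet size)^N distinct values: 25^6 = 244,140,625 for 6-character codes
                System.out.println((long) Math.pow(ALPHABET.length(), 6));
                System.out.println(randomCode(6));
            }
        }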

    Read the article

  • Flow Fields in Dynamics NAV

    - by T
    I don’t know the exact business reason, but someone asked how to identify all sales shipping orders with any negative quantities on them. They needed to print these separately from the shipments that only have positive quantities. Here is one way to solve this problem. In NAV, open Sales Shipment Header. Add a field of type Boolean (the field type should match the value you expect the flow field method to return; in this case we will use an Exist, so we expect a bool, but we could have used a Sum, Average, Min, Max, Count, or a Lookup to return a value). Define it as a flow field and click on the ellipsis next to CalcFormula. Now [Has Negative Qty] is false if no negatives exist and true if a negative exists, so it is ready to be used as a filter on the report. Don’t forget that if you are using it in code, you may need to CalcFields. Hope that helps. If you really can’t afford the field in your header, you can use code to check the lines for a negative value each time and use a skip or break function to skip that header record, but it seems expensive to check them all if you only want a few to print. Please let me know if you think of a better solution.
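
    For reference, the CalcFormula behind that ellipsis would look roughly like the sketch below; the line table and field names are assumed from standard NAV (Sales Shipment Line linked by Document No.), and the C/AL fragment shows the CalcFields point made above.

        // FlowField sketch on the Sales Shipment Header table
        Has Negative Qty : Boolean
          FieldClass  = FlowField
          CalcFormula = Exist("Sales Shipment Line" WHERE (Document No.=FIELD(No.),Quantity=FILTER(<0)))

        // C/AL usage sketch, e.g. in a report's OnAfterGetRecord trigger
        "Sales Shipment Header".CALCFIELDS("Has Negative Qty");
        IF NOT "Sales Shipment Header"."Has Negative Qty" THEN
          CurrReport.SKIP;  // only headers with at least one negative line are printed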

    Read the article

  • Select Data From XML in MS SQL Server (T-SQL)

    - by Doug Lampe
    So you have used XML to give you some schema flexibility in your database, but now you need to get some data out.  What do you do?  The solution is relatively simple:   DECLARE @iDoc INT /* Stores a pointer to the XML document */ DECLARE @XML VARCHAR(MAX) /* Stores the content of the XML */   set @XML = (SELECT top 1 Xml_Column_Name FROM My_Table where Primary_Key_Column = 'Some Value')   EXEC sp_xml_preparedocument @iDoc OUTPUT, @XML   SELECT * FROM OPENXML(@iDoc,'/some/valid/xpath',2)   WITH (output_column1_name varchar(50)  'xml_node_name1',   output_column2_name varchar(50)  'xml_node_name2')   EXEC sp_xml_removedocument @iDoc   In this example, the XML data would look something like this:   <some>   <valid>     <xpath>       <xml_node_name1>Value1</xml_node_name1>       <xml_node_name2>Value2</xml_node_name2>     </xpath>   </valid> </some>   The resulting query should give you this:   output_column1_name    output_column2_name ------------------------------------------ Value1                 Value2   Note that in this example we are only looking at a single record at a time.  You could use a cursor to iterate through multiple records and insert the XML data into a temporary table.
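
    As a set-based alternative to the cursor mentioned at the end (a different technique from the OPENXML approach in the post), the xml type's nodes()/value() methods can shred every row in one query. A hedged sketch reusing the post's table and column names, with a CAST because the column is declared VARCHAR(MAX):

        SELECT T.Primary_Key_Column,
               X.n.value('xml_node_name1[1]', 'varchar(50)') AS output_column1_name,
               X.n.value('xml_node_name2[1]', 'varchar(50)') AS output_column2_name
        FROM   My_Table AS T
               CROSS APPLY (SELECT CAST(T.Xml_Column_Name AS xml) AS x) AS C
               CROSS APPLY C.x.nodes('/some/valid/xpath') AS X(n);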

    Read the article

  • About C# objects and the possibilities it has

    - by user527825
    As a novice programmer, I always wonder about C#'s capabilities. I know it is still early to judge that, but all I want to know is: can C# do complex stuff, or anything outside the Windows OS? I think C# is a proprietary language (I don't know if I said that right), meaning you can't use it outside Visual Studio or Windows. Also, you can't create your own control (it's called an object, right?); you are forced to use those available in the toolbox and their properties and methods. Can C# be used with the OpenGL API or the DirectX API? Finally, it always bothers me when I start doing things in Visual Studio. I know it sounds arrogant to say, but sometimes I feel that I don't like being forced to use something even if it's helpful; I feel (do I have the right to feel this?) that I want to do all things by myself. Don't laugh, I just feel that this will give me a better understanding. Is Visual C# like using MaxScript inside 3ds Max, in that C# is exclusively for Windows, Forms, and Windows-related components, while MaxScript is only for 3D editing and manipulation of various things in that software? If it is too difficult for a beginner, I hope you don't answer the fourth question, as I don't have enough motivation and I want to keep the little I have. Note: Sorry for my English, I am self-taught and never used the language with native speakers, so expect some errors. I have a lot of questions regarding many things; what daily number of questions do you think would not bother the admins of the site and the members here? Thank you for your time.

    Read the article

  • OOP implementation of BUFFS and Stats. Suggestion

    - by Mattia Manzo Manzati
    I am developing an MMORPG server using NodeJS. I am not sure how to implement buffs; I mean, equipped objects or used skills have effects on the Player(), which has many Stats(), some of them with a max cap... Effects can change the Stat value, increasing or decreasing it by a value or a percentage, or completely rewriting the value of the stat. After a while I decided to create a base class for buffs, which can be hidden (if they are cast from an equipped object) or shown if they came from an ability (Spell). Anyway, I need suggestions on how to implement it: use an array of all active buffs for a stat and have a function calculate the value of the stat affected by buffs each time I need it, or...? Other, more OOP ways to do it? I have read this: What's a way to implement a flexible buff/debuff system? But that implements only a percentage system, where buffs can only say "+10%, +20%, etc...", whereas I would love to have a hybrid system, which can have percentage values or static values (like WoW does), and using modifiers that's hard to implement, because modifiers refer to the current value of the stat :/ Thanks for suggestions :)
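
    One common shape for the hybrid (flat + percentage) case is to keep the base value untouched and recompute on demand from a list of modifiers; the sketch below uses Java purely to show the structure (class and field names are assumptions, and the same layout ports directly to a Node.js class).

        import java.util.ArrayList;
        import java.util.List;

        // A stat whose effective value is recomputed from its modifiers on demand.
        class Stat {
            private final double baseValue;
            private final double maxCap;                     // assumed cap; use Double.MAX_VALUE for "no cap"
            private final List<Modifier> modifiers = new ArrayList<>();

            Stat(double baseValue, double maxCap) {
                this.baseValue = baseValue;
                this.maxCap = maxCap;
            }

            void addModifier(Modifier m)    { modifiers.add(m); }
            void removeModifier(Modifier m) { modifiers.remove(m); }

            // Flat bonuses are applied to the base first, then percentage bonuses scale the result.
            double getValue() {
                double flat = 0.0, percent = 0.0;
                for (Modifier m : modifiers) {
                    if (m.isPercent) percent += m.amount;
                    else             flat    += m.amount;
                }
                return Math.min((baseValue + flat) * (1.0 + percent), maxCap);
            }
        }

        // A single buff/debuff contribution: +10 flat, or +0.10 for +10%.
        class Modifier {
            final double amount;
            final boolean isPercent;
            Modifier(double amount, boolean isPercent) {
                this.amount = amount;
                this.isPercent = isPercent;
            }
        }

    Because getValue() always starts from baseValue, percentage modifiers never compound against an already-buffed number, which sidesteps the "modifiers refer to the current value" problem; a full rewrite ("set to X") can be handled as a third modifier kind that short-circuits the calculation.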

    Read the article

  • Why is this beat detection code failing to register some beats properly?

    - by Quincy
    I made this SoundAnalyzer class to detect beats in songs: class SoundAnalyzer { public SoundBuffer soundData; public Sound sound; public List<double> beatMarkers = new List<double>(); public SoundAnalyzer(string path) { soundData = new SoundBuffer(path); sound = new Sound(soundData); } // C = threshold, N = size of history buffer / 1024 B = bands public void PlaceBeatMarkers(float C, int N, int B) { List<double>[] instantEnergyList = new List<double>[B]; GetEnergyList(B, ref instantEnergyList); for (int i = 0; i < B; i++) { PlaceMarkers(instantEnergyList[i], N, C); } beatMarkers.Sort(); } private short[] getRange(int begin, int end, short[] array) { short[] result = new short[end - begin]; for (int i = 0; i < end - begin; i++) { result[i] = array[begin + i]; } return result; } // get a array of with a list of energy for each band private void GetEnergyList(int B, ref List<double>[] instantEnergyList) { for (int i = 0; i < B; i++) { instantEnergyList[i] = new List<double>(); } short[] samples = soundData.Samples; float timePerSample = 1 / (float)soundData.SampleRate; int sampleIndex = 0; int nextSamples = 1024; int samplesPerBand = nextSamples / B; // for the whole song while (sampleIndex + nextSamples < samples.Length) { complex[] FFT = FastFourier.Calculate(getRange(sampleIndex, nextSamples + sampleIndex, samples)); // foreach band for (int i = 0; i < B; i++) { double energy = 0; for (int j = 0; j < samplesPerBand; j++) energy += FFT[i * samplesPerBand + j].GetMagnitude(); energy /= samplesPerBand; instantEnergyList[i].Add(energy); } if (sampleIndex + nextSamples >= samples.Length) nextSamples = samples.Length - sampleIndex - 1; sampleIndex += nextSamples; samplesPerBand = nextSamples / B; } } // place the actual markers private void PlaceMarkers(List<double> instantEnergyList, int N, float C) { double timePerSample = 1 / (double)soundData.SampleRate; int index = N; int numInBuffer = index; double historyBuffer = 0; //Fill the history buffer with n * instant energy for (int i = 0; i < index; i++) { historyBuffer += instantEnergyList[i]; } // If instantEnergy / samples in buffer < instantEnergy for the next sample then add beatmarker. while (index + 1 < instantEnergyList.Count) { if(instantEnergyList[index + 1] > (historyBuffer / numInBuffer) * C) beatMarkers.Add((index + 1) * 1024 * timePerSample); historyBuffer -= instantEnergyList[index - numInBuffer]; historyBuffer += instantEnergyList[index + 1]; index++; } } } For some reason it's only detecting beats from 637 sec to around 641 sec, and I have no idea why. I know the beats are being inserted from multiple bands since I am finding duplicates, and it seems that it's assigning a beat to each instant energy value in between those values. It's modeled after this: http://www.flipcode.com/misc/BeatDetectionAlgorithms.pdf So why won't the beats register properly?

    Read the article

  • The five steps of business intelligence adoption: where are you?

    - by Red Gate Software BI Tools Team
    When I was in Orlando and New York last month, I spoke to a lot of business intelligence users. What they told me suggested a path of BI adoption. The user’s place on the path depends on the size and sophistication of their organisation. Step 1: A company with a database of customer transactions will often want to examine particular data, like revenue and unit sales over the last period for each product and territory. To do this, they probably use simple SQL queries or stored procedures to produce data on demand. Step 2: The results from step one are saved in an Excel document, so business users can analyse them with filters or pivot tables. Alternatively, SQL Server Reporting Services (SSRS) might be used to generate a report of the SQL query for display on an intranet page. Step 3: If these queries are run frequently, or business users want to explore data from multiple sources more freely, it may become necessary to create a new database structured for analysis rather than CRUD (create, retrieve, update, and delete). For example, data from more than one system — plus external information — may be incorporated into a data warehouse. This can become ‘one source of truth’ for the business’s operational activities. The warehouse will probably have a simple ‘star’ schema, with fact tables representing the measures to be analysed (e.g. unit sales, revenue) and dimension tables defining how this data is aggregated (e.g. by time, region or product). Reports can be generated from the warehouse with Excel, SSRS or other tools. Step 4: Not too long ago, Microsoft introduced an Excel plug-in, PowerPivot, which allows users to bring larger volumes of data into Excel documents and create links between multiple tables.  These BISM Tabular documents can be created by the database owners or other expert Excel users and viewed by anyone with Excel PowerPivot. Sometimes, business users may use PowerPivot to create reports directly from the primary database, bypassing the need for a data warehouse. This can introduce problems when there are misunderstandings of the database structure or no single ‘source of truth’ for key data. Step 5: Steps three or four are often enough to satisfy business intelligence needs, especially if users are sophisticated enough to work with the warehouse in Excel or SSRS. However, sometimes the relationships between data are too complex or the queries which aggregate across periods, regions etc are too slow. In these cases, it can be necessary to formalise how the data is analysed and pre-build some of the aggregations. To do this, a business intelligence professional will typically use SQL Server Analysis Services (SSAS) to create a multidimensional model — or “cube” — that more simply represents key measures and aggregates them across specified dimensions. Step five is where our tool, SSAS Compare, becomes useful, as it helps review and deploy changes from development to production. For us at Red Gate, the primary value of SSAS Compare is to establish a dialog with BI users, so we can develop a portfolio of products that support creation and deployment across a range of report and model types. For example, PowerPivot and the new BISM Tabular model create a potential customer base for tools that extend beyond BI professionals. We’re interested in learning where people are in this story, so we’ve created a six-question survey to find out. Whether you’re at step one or step five, we’d love to know how you use BI so we can decide how to build tools that solve your problems. 
So if you have sixty seconds to spare, tell us on the survey!

    Read the article

  • Amazon Careers website - are resumes processed in plain text format only?

    - by sapphiremirage
    The submission site has the following options: "Please upload your resume (Word Document, max size: 512 KB) OR Please copy and paste the text version of your file here", with a text box below the latter option. I went ahead and uploaded my shiny LaTeX resume (as a PDF), despite the fact that they seem to want a Word Document, and there didn't seem to be any issues. However, when I went back to edit my profile, there was no evidence that my PDF had been uploaded, other than a text version of my resume, awfully formatted and clearly stripped from the PDF, sitting in the text box below "Please copy and paste the text version of your file here". Exasperated, I did a quick and dirty copy of the text from my resume into a Word doc and uploaded that. Same result: no evidence of a file uploaded, just a stripped text version in the textbox. What I'm wondering now is, are they only going to look at the text version of my resume? If that's the case then I'm obviously going to edit it so that it looks halfway decent and doesn't contain such atrocities from the conversion as "Other Skills: LTEX". I can pretty up plain text files without too much effort, so this isn't that big of a deal. However, my LaTeX resume is going to look better than anything I can do in plain text, so if the site is actually keeping a copy of that, then I certainly don't want to override it. Is there anyone here who has either gone through the Amazon hiring process or interviewed candidates and knows how this works? (i.e., when on site with Amazon, did the interviewers have diversely formatted resumes, or did they all look suspiciously similar?)

    Read the article

  • Emulate Historical Figures, e.g. Einstein - Is this possible using linguistic logic for my http://www.ustimeline.com Education System

    - by Johnnylight
    After hearing about the success of IBM's Watson, I started thinking that perhaps emulating human language is now possible. My goal is to create Virtual Historical characters to represent the main characters in my Adventur-Cation The Great American Adventure program, such as Einstein or Crazy Horse. The goal is to build an intelligent system capable of indexing the internet and storing the data in a schema informed by modern knowledge of linguistic theory (phonemes, morphemes, syntax), so that it is capable of returning a semantically sound response very similar to the response the same person would give if still alive today. The goal would be to use the same engine/system for all characters. Each character would have their own digital representation and voice, and would organize data differently based on tags/keywords stored about the individual. Imagine a Max Headroom Einstein. Based on the success of Watson, I believe something like this may now be possible. It would be an interesting way to study history and a vehicle for entertainment as well. Can anyone confirm whether this has already been attempted? Is anyone interested in exploring this using Cognitive Science, Psychology, Artificial Intelligence, historical data captured on the internet, and Linguistic theory?

    Read the article

  • Migrating My GWB Blog Over To WordPress

    - by deadlydog
    Geeks With Blogs has been good to me, but there are too many tempting things about WordPress for me to stay with GWB.  In particular their statistics features, and also I like that I can apply specific tags to my posts to make them easier to find in Google (and I never was able to get categories working on GWB). The one thing I don’t like about WordPress is I can’t seem to find a theme that stretches the Content area to take up the rest of the space on wide resolutions… hopefully I’ll be able to overcome that obstacle soon though. For those who are curious as to how I actually moved all of my posts across, I ended up just using Live Writer to open my GWB posts, and then just changed it to publish to my WordPress blog.  This was fairly painless, but with my 27 posts it took probably about 2 hours of manual effort to do.  Most of the time WordPress was automatically able to copy the images over, but sometimes I had to save the pics from my GWB posts and then re-add them in Live Writer to my WordPress post.  I found another developer’s automated solution (in alpha mode), but opted to do it manually since I wanted to manually specify Categories and Tags on each post anyway.  The one thing I still have left to do is move the worthwhile comments across from the GWB posts to the new WordPress posts. The largest pain point was that with GWB I was using the Code Snippet Plugin for Live Writer for my source code snippets, and when they got transferred over to WordPress they looked horrible.  So I ended up finding a new Live Writer plugin called SyntaxHighlighter that looks even nicer on my posts. If anybody knows of a nice WordPress theme that stretches the content area to fit the width of the screen, please let me know.  All of the themes I’ve found seem to have a max width set on them, so I end up with much wasted space on the left and right sides.  Since I post lots of code snippets, the more horizontal space the better! So if you want to continue to follow me, my blog's new home is now at https://deadlydog.wordpress.com

    Read the article

  • Learn More About the PO Approvals Analyzer

    - by LuciaC
    You may think that the PO Approvals Analyzer for Release 12 is only for diagnosing problems when you have a single Purchase Order or Requisition stuck in process, but it offers valuable information to keep your Procurement environment healthy.  Consider this:     The analyzer will list all Procurement critical patches that have not been applied.     It will list invalid Procurement objects with their error messages and provide solutions.     It validates setup and database conditions, for example max extents and space issues. Also, the analyzer can be run on all Purchasing documents starting from a date you enter.  This multiple-document check provides validations on:     Data corruption issues.     Workflow errors with generic messages, i.e. document manager errors.     Documents with workflows in error that cannot be progressed via the application. And, unlike other diagnostics, the analyzer provides known solutions to the problems indicated! So access the Analyzer today and run it on your instance!  Access it now via Doc ID 1525670.1.

    Read the article

  • Auto-organized / smart inventory system?

    - by VeXe
    For the past week I've been working on an inventory system with Unity3D. At first I got help from the guys at Design3, but it wasn't too long till we split paths, because I really didn't like the way they did their code; it didn't have any smell of OOP whatsoever. I took it several steps further - items take more than one slot, an advanced placement system (items try their best to find the best close fit), a local mouse system (the mouse gets trapped in the active bag area), etc. Here's a demo of my work. What we would like to have in our game is an auto-organizing feature - not auto-sort. We want this feature because our inventory is going to be in 'real-time' - not like in Resident Evil 1, 2, 3, etc., where you would pause the game and do things in your inventory. Now imagine yourself in a sticky situation, surrounded by zombies, and you don't have bullets; you look around and see that there are bullets nearby on the ground, so you go for them and try to pick them up, but they don't fit! You look at your inventory and find out that if you reorganize some of the items, it will fit! Now the player in that situation doesn't have time to reorganize, because he's surrounded by zombies and will die if he stops and organizes the inventory to make space (remember, the inventory is real-time, no pausing) - wouldn't it be nice for that to happen automatically? Yes! (I believe this has been implemented in some games like Dungeon Siege or something, so it's surely doable.) Take a look at this picture for example: yes, if you auto-sort you will get your spaces, but it's bad because: 1- Expensive: it doesn't need a whole sort operation to free those spaces; in the first picture, just slide the red item at the bottom to the very left, and you get the same spaces that you got from the auto-sort. 2- It's annoying to the player: "Who the F told you to re-order my stuff?" I'm not asking for "how to write the code" for this, I'm just asking for some guidance: where to look, what algorithms are involved? Is this something related to graphs and shortest-path stuff? I hope not, cuz I didn't manage to continue my college studies :/ But even if it is, just tell me and I will learn the related stuff. Notice there could be more than just one solution. So I guess the first thing I have to do is figure out if the situation is 'solvable' - if I know how to determine whether a situation is solvable or not, then I can 'solve' it. I just need to know the conditions that make it 'solvable'. And I believe there must be some algorithm/data structure for this. Here's a pic of more than one solution for trying to fit a 1x3 item: the arrows show just one of the solutions, but if you look you will find more than one. This is what I ultimately want: not auto-sorting, but finding a solution and applying it. Note that if I spend time on it I will come up with a way to solve it, but it wouldn't be the best way; it's like holding a car wheel with your feet instead of your hands! XD Or just like trying to solve an issue that requires arrays, but you're not yet aware of their existence! So what is the right approach to this? Hope somebody helps, thanks a lot in advance :)
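
    For what it's worth, deciding whether a set of items fits a grid is 2D rectangle packing, which is NP-hard in general, so games typically settle for a fast heuristic such as first-fit decreasing: sort items by area and greedily place each one in the first free spot. A minimal Java sketch of that feasibility check (the grid and item representations are assumptions); note that a heuristic can report "doesn't fit" even when a clever arrangement exists.

        import java.util.Arrays;
        import java.util.Comparator;

        public class InventoryPacker {
            // Each item is {width, height} in slots.
            public static boolean canPackAll(int gridW, int gridH, int[][] items) {
                boolean[][] used = new boolean[gridH][gridW];
                // First-fit decreasing: placing big items first tends to leave usable gaps for small ones.
                int[][] sorted = items.clone();
                Arrays.sort(sorted, Comparator.comparingInt((int[] it) -> it[0] * it[1]).reversed());
                for (int[] it : sorted) {
                    if (!placeFirstFit(used, it[0], it[1])) return false;  // heuristic gives up here
                }
                return true;
            }

            private static boolean placeFirstFit(boolean[][] used, int w, int h) {
                int gridH = used.length, gridW = used[0].length;
                for (int y = 0; y + h <= gridH; y++) {
                    for (int x = 0; x + w <= gridW; x++) {
                        if (fits(used, x, y, w, h)) {
                            for (int dy = 0; dy < h; dy++)
                                for (int dx = 0; dx < w; dx++)
                                    used[y + dy][x + dx] = true;
                            return true;
                        }
                    }
                }
                return false;
            }

            private static boolean fits(boolean[][] used, int x, int y, int w, int h) {
                for (int dy = 0; dy < h; dy++)
                    for (int dx = 0; dx < w; dx++)
                        if (used[y + dy][x + dx]) return false;
                return true;
            }
        }

    If the greedy pass succeeds, the positions chosen inside placeFirstFit are the rearrangement to apply (they would just need to be recorded); if it fails, a bounded backtracking search over only the few affected items is a common middle ground before giving up.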

    Read the article

  • Android - big game universe

    - by user1641923
    I am new to Android development, though I have much experience with Java, C++ and PHP programming, and a bit of experience with vector graphics too (basic 3D Studio Max, Flash, etc). I am starting to work on an Android game. It is going to be a 2D space shooter/RPG, and I am not going to use any game engines or any 3rd-party libs. I really want to create a very large game universe, or even a pseudo-infinite one (without visible borders, as if it were a 2D projection of a sphere). It should include 10-12 clusters of 7-8 planets/other space objects, a random amount of single asteroids/comets which the player can interact with, and also a non-interactive background. I am looking for the least complicated approach to create such a universe. My current ideas are: Simply create bitmaps with space scenery backgrounds so that they can be seamlessly tiled, construct my 2D universe out of these tiles, then place interactive objects (planets, other spaceships) on it. Or use vector graphics: I would have a solid color background, some random background objects and gradients here and there. My problems here: lack of knowledge of how well vector graphics is integrated in Android. Performance? Memory usage? Does Android manage big bitmaps well? Do all of the bitmaps have to be in memory during the whole game? I am interested in technical details regarding each of the ideas and a suggestion as to which I should go with.
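
    On the pseudo-infinite point, the usual trick with the tiled-bitmap idea is to treat world coordinates as living on a torus: positions, the camera and tile lookups are all taken modulo the world size, so the player never meets an edge and only the handful of tiles around the camera ever needs to be in memory. A small Java sketch of the wrapping math (the world and tile sizes are arbitrary assumptions):

        public final class WorldWrap {
            public static final int WORLD_SIZE = 16384;   // assumed world extent in world units
            public static final int TILE_SIZE  = 512;     // assumed background tile size

            // Wrap any coordinate into [0, WORLD_SIZE); works for negative values too.
            public static int wrap(int coord) {
                return ((coord % WORLD_SIZE) + WORLD_SIZE) % WORLD_SIZE;
            }

            // Shortest signed distance from a to b on the wrapped axis, useful for AI and culling.
            public static int wrappedDelta(int a, int b) {
                int d = wrap(b - a);
                return d > WORLD_SIZE / 2 ? d - WORLD_SIZE : d;
            }

            // Which background tile column/row a wrapped coordinate falls into.
            public static int tileIndex(int coord) {
                return wrap(coord) / TILE_SIZE;
            }
        }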

    Read the article

  • Best way to calculate unit deaths in browser game combat?

    - by MikeCruz13
    My browser game's combat system is written and mechanically functioning well. It's written in PHP and uses a SQL database. I'm happy with the unit balance in relation to one another. I am, however, a little worried about how I'm calculating unit deaths when one player attacks another, because the deaths seem to pile up a little fast for my taste. For this system, a battle doesn't just trigger, calculate a winner, and end. Instead, it is allowed to go for several rounds (say one round every 15 mins.) until one side passes a threshold of being too strong for the other player, and it allows players to send reinforcements between rounds. Each round, units pair up and attack each other. Essentially what I do is calculate the damage: AP = Attack Points HP = Hit Points Unit AP * Quantity * Random Factors * other factors (such as attrition) I take that and divide by the defending unit's HP to find the number of casualties among defending units. So, for example (simplified to take out some factors), if I have: 500 attackers with 50 AP vs 1000 defenders with 100 HP = 250 deaths. I wonder if that last step could be handled better to reduce the deaths piling up. Some ideas: Do I just give all the units more HP? Do I set the attacking unit's AP to a max of the defender's HP to make sure each attacker only kills one unit? (Is that fair if I have fewer huge units vs. many small units?) Do I spread the damage around more by taking the defending unit's quantity into account, i.e. in that scenario some are dead and some are at 50% damage? (How would I track this every round?) Are there other, better mathematical approaches?
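
    On the "some are dead and some are at 50% damage" idea, one common bookkeeping trick (familiar from stack-based combat games) is to carry leftover damage on the defending stack between rounds: deaths = floor((carried + new damage) / unit HP), and the remainder stays on the stack. A hedged sketch of that arithmetic, written in Java only for illustration (the two fields are the only extra state to persist per stack, and the logic ports directly to PHP):

        // Result of applying one round of damage to a stack of identical units.
        class StackState {
            int quantity;        // units still alive
            int carriedDamage;   // damage already absorbed by the "front" unit (0 <= carriedDamage < unitHp)

            StackState(int quantity, int carriedDamage) {
                this.quantity = quantity;
                this.carriedDamage = carriedDamage;
            }
        }

        class CombatMath {
            // Apply `damage` to a stack whose units each have `unitHp` hit points.
            static StackState applyDamage(StackState stack, int unitHp, long damage) {
                long total = stack.carriedDamage + damage;
                int deaths = (int) Math.min(stack.quantity, total / unitHp);
                int remaining = stack.quantity - deaths;
                // If the stack is wiped out, nothing is carried; otherwise keep the remainder as partial damage.
                int carried = remaining > 0 ? (int) (total % unitHp) : 0;
                return new StackState(remaining, carried);
            }
        }

    With the example above, 25,000 damage still kills exactly 250 defenders, but 24,950 damage now kills 249 and carries 50 damage into the next round instead of rounding a partial death up or down.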

    Read the article

  • IIS 7 SSO stops working during high CPU load? [migrated]

    - by DanB
    On our IIS7 site (Windows 2008 Server), we have set up single sign-on (SSO). It seems to work fine most of the time, but when the CPU load becomes high, SSO authentication completely stops working. I did some research and tried this suggestion to increase the max number of worker processes in the default app pool, but the increase did not help. Some details: The site is a WordPress blog. The server has plenty of RAM (2 GB) and free disk space. SSO is achieved by putting a copy of the WordPress login page (wp-login.php) into a subfolder below the root that has anonymous authentication disabled, and then redirecting the browser to it. This was the recommendation of Microsoft given to our consultants. To increase CPU load for testing, I have three scripts hit the home page simultaneously, over and over. This drives CPU to 100%. When these scripts are running, SSO authentication simply doesn't happen. As soon as I stop the scripts, SSO works again. (I should mention that the SSO problem also happens when many users visit the site at once....) The WordPress database process (mysqld) is not stressed at all by the scripts. I would be happy to provide further diagnostics. Any help appreciated!

    Read the article

  • Dynamically Changing the Display Names of Menus and Popups

    - by Geertjan
    Very interesting thing and handy to know when needed is the fact that "menuText" and "popupText" (from org.openide.awt.ActionRegistration) can be changed dynamically, via "putValue" as shown below for "popupText". The Action class, in this case, needs to be eager, hence you won't receive the object of interest via the constructor, but you can easily use the global Lookup for that purpose instead, as also shown below. import java.awt.event.ActionEvent; import java.text.DateFormat; import java.text.SimpleDateFormat; import javax.swing.AbstractAction; import org.netbeans.api.project.Project; import org.netbeans.api.project.ProjectInformation; import org.netbeans.api.project.ProjectUtils; import org.openide.awt.ActionID; import org.openide.awt.ActionReference; import org.openide.awt.ActionRegistration; import org.openide.util.Utilities; @ActionID( category = "Project", id = "org.ptt.DemoProjectAction") @ActionRegistration( lazy = false, displayName = "NOT-USED") @ActionReference(path = "Projects/Actions", position = 0) public final class DemoProjectAction extends AbstractAction{ private final ProjectInformation context; public DemoProjectAction() { putValue("popupText", "Select Me To See Current Time!"); context = ProjectUtils.getInformation( Utilities.actionsGlobalContext().lookup(Project.class)); } @Override public void actionPerformed(ActionEvent e) { refresh(); } protected void refresh() { DateFormat formatter = new SimpleDateFormat("HH:mm:ss"); String formatted = formatter.format(System.currentTimeMillis()); putValue("popupText", "Time: " + formatted + " (" + context.getDisplayName() +")"); } } Now, let's do something semi useful and display, in the popup, which is available when you right-click a project, the time since the last change was made anywhere in the project, i.e., we can listen recursively to any changes done within a project and then update the popup with the newly acquired information, dynamically: import java.awt.event.ActionEvent; import java.text.DateFormat; import java.text.SimpleDateFormat; import javax.swing.AbstractAction; import org.netbeans.api.project.Project; import org.netbeans.api.project.ProjectUtils; import org.openide.awt.ActionID; import org.openide.awt.ActionReference; import org.openide.awt.ActionRegistration; import org.openide.filesystems.FileAttributeEvent; import org.openide.filesystems.FileChangeListener; import org.openide.filesystems.FileEvent; import org.openide.filesystems.FileRenameEvent; import org.openide.util.Utilities; @ActionID( category = "Project", id = "org.ptt.TrackProjectTimerAction") @ActionRegistration( lazy = false, displayName = "NOT-USED") @ActionReference( path = "Projects/Actions", position = 0) public final class TrackProjectTimerAction extends AbstractAction implements FileChangeListener { private final Project context; private Long startTime; private Long changedTime; private DateFormat formatter; public TrackProjectTimerAction() { putValue("popupText", "Enable project time tracker"); this.formatter = new SimpleDateFormat("HH:mm:ss"); context = Utilities.actionsGlobalContext().lookup(Project.class); context.getProjectDirectory().addRecursiveListener(this); } @Override public void actionPerformed(ActionEvent e) { startTimer(); } protected void startTimer() { startTime = System.currentTimeMillis(); String formattedStartTime = formatter.format(startTime); putValue("popupText", "Timer started: " + formattedStartTime + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")"); } @Override public void 
fileChanged(FileEvent fe) { changedTime = System.currentTimeMillis(); formatter = new SimpleDateFormat("mm:ss"); String formattedLapse = formatter.format(changedTime - startTime); putValue("popupText", "Time since last change: " + formattedLapse + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")"); startTime = changedTime; } @Override public void fileFolderCreated(FileEvent fe) {} @Override public void fileDataCreated(FileEvent fe) {} @Override public void fileDeleted(FileEvent fe) {} @Override public void fileRenamed(FileRenameEvent fre) {} @Override public void fileAttributeChanged(FileAttributeEvent fae) {} }

    Read the article

  • Design practice for securing data inside Azure SQL

    - by Sid
    Update: I'm looking for a specific design practice as we try to build our own database encryption. Azure SQL doesn't support many of the encryption features found in SQL Server (table and column encryption). We need to store some sensitive information that needs to be encrypted, and we've rolled our own using AesCryptoServiceProvider to encrypt/decrypt data to/from the database. This solves the immediate issue (no cleartext in the db) but poses other problems, like key rotation (we have to roll our own code for this, walking through the db converting old cipher text into new cipher text) and metadata mapping of which tables and which columns are encrypted. This is simple when it's just a couple of columns (send an email to all devs / document it), but that quickly gets out of hand ... So, what is the best practice for doing application-level encryption with a database that doesn't support encryption? In particular, what is a good design to solve the above two bullet points? If you have specific schema additions, I'd love it if you could give details ("Have a NVARCHAR(max) column to store the cipher metadata as JSON" or a SQL script/commands). If someone would like to recommend a library, I'd be happy to stay away from "DIY" too. Before going too deep - I assume there isn't any way I can add encryption support to Azure by creating a stored procedure, right?
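
    On those two points, one widely used pattern is envelope-style columns: every ciphertext column carries its IV and the key version that produced it, and a small key-ring table records which versions exist, so rotation can re-encrypt row by row while the schema itself documents what is encrypted. A hedged T-SQL sketch of the schema half (all table and column names are assumptions, not an Azure feature):

        -- One row per key version; the key material itself lives outside the DB (config/key vault).
        CREATE TABLE dbo.EncryptionKeyRing (
            KeyVersion   INT        NOT NULL PRIMARY KEY,
            CreatedUtc   DATETIME2  NOT NULL,
            RetiredUtc   DATETIME2  NULL
        );

        -- Each protected value carries its ciphertext, IV and the key version used to produce it,
        -- so old and new ciphertext can coexist while a background job re-encrypts.
        CREATE TABLE dbo.CustomerSecret (
            CustomerId   INT             NOT NULL PRIMARY KEY,
            CipherText   VARBINARY(MAX)  NOT NULL,
            IV           VARBINARY(16)   NOT NULL,
            KeyVersion   INT             NOT NULL
                REFERENCES dbo.EncryptionKeyRing (KeyVersion)
        );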

    Read the article

  • Laptop battery charging capacity reduced to 52%

    - by omjaijagdish
    I have been using Ubuntu 11.04 on a DELL Inspiron 14R (N5010) laptop for the last three months. Before I switched to Ubuntu, my laptop battery used to give 2.5 to 3 hrs of backup. But since I have been using Ubuntu, it has been reduced to 1 to 1.5 hrs at max. I tried the following commands: $ cat /proc/acpi/battery/BAT0/state which gave the result: present: yes capacity state: ok charging state: charged present rate: 1 mA remaining capacity: 4400 mAh present voltage: 12407 mV Then I tried $ acpi -b and the result was: Battery 0: Unknown, 100% When I gave the command $ upower -i /org/freedesktop/UPower/devices/battery_BAT0 the result was: native-path: /sys/devices/LNXSYSTM:00/device:00/PNP0C0A:00/power_supply/BAT0 model: DELL W7H3N08 serial: 7114 power supply: yes updated: Sat Nov 24 11:25:34 2012 (21 seconds ago) has history: yes has statistics: yes battery present: yes rechargeable: yes state: fully-charged energy: 48.4748 Wh energy-empty: 0 Wh energy-full: 48.4748 Wh energy-full-design: 48.9595 Wh energy-rate: 0.011017 W voltage: 12.408 V percentage: 100% **capacity: 52.9253%** technology: lithium-ion Can someone please let me know what is going wrong with my laptop? How can I get it to charge to full capacity?

    Read the article

  • How can I render a semi transparent model with OpenGL correctly?

    - by phobitor
    I'm using OpenGL ES 2 and I want to render a simple model with some level of transparency. I'm just starting out with shaders, and I wrote a simple diffuse shader for the model without any issues but I don't know how to add transparency to it. I tried to set my fragment shader's output (gl_FragColor) to a non opaque alpha value but the results weren't too great. It sort of works, but it looks like certain model triangles are only rendered based on the camera position... It's really hard to describe what's wrong so please watch this short video I recorded: http://www.youtube.com/watch?v=s0JqA0rZabE I thought this was a depth testing issue so I tried playing around with enabling/disabling depth testing and back face culling. Enabling back face culling changes the output slightly but the problem in the video is still there. Enabling/disabling depth testing doesn't seem to do anything. Could anyone explain what I'm seeing and how I can add some simple transparency to my model with the shader? I'm not looking for advanced order independent transparency implementations. edit: Vertex Shader: // color varying for fragment shader varying mediump vec3 LightIntensity; varying highp vec3 VertexInModelSpace; void main() { // vec4 LightPosition = vec4(0.0, 0.0, 0.0, 1.0); vec3 LightColor = vec3(1.0, 1.0, 1.0); vec3 DiffuseColor = vec3(1.0, 0.25, 0.0); // find the vector from the given vertex to the light source vec4 vertexInWorldSpace = gl_ModelViewMatrix * vec4(gl_Vertex); vec3 normalInWorldSpace = normalize(gl_NormalMatrix * gl_Normal); vec3 lightDirn = normalize(vec3(LightPosition-vertexInWorldSpace)); // save vertexInWorldSpace VertexInModelSpace = vec3(gl_Vertex); // calculate light intensity LightIntensity = LightColor * DiffuseColor * max(dot(lightDirn,normalInWorldSpace),0.0); // calculate projected vertex position gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; } Fragment Shader: // varying to define color varying vec3 LightIntensity; varying vec3 VertexInModelSpace; void main() { gl_FragColor = vec4(LightIntensity,0.5); }
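
    What the video describes (triangles appearing or disappearing depending on the camera) is the classic symptom of rendering a blended mesh while depth writes are still on and nothing is sorted: whichever triangles land in the depth buffer first discard the ones drawn later behind them, so the visible set changes with the camera. A hedged sketch of the usual fix for simple cases, shown with the Android GLES20 binding purely as an example (the binding and the drawOpaqueObjects/drawTransparentObjects names are assumptions; the GL calls map one-to-one to any ES 2 wrapper):

        import android.opengl.GLES20;

        // Typical pass ordering for simple transparency in OpenGL ES 2 (Android binding assumed).
        public class TransparencyPass {

            public void drawFrame() {
                GLES20.glEnable(GLES20.GL_DEPTH_TEST);

                // 1) Opaque geometry first, with depth writes on.
                GLES20.glDepthMask(true);
                drawOpaqueObjects();

                // 2) Transparent geometry last: blended, depth test still on but depth writes off,
                //    ideally sorted back-to-front so it doesn't incorrectly occlude itself.
                GLES20.glEnable(GLES20.GL_BLEND);
                GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
                GLES20.glDepthMask(false);
                drawTransparentObjects();

                // 3) Restore state for the next frame.
                GLES20.glDepthMask(true);
                GLES20.glDisable(GLES20.GL_BLEND);
            }

            private void drawOpaqueObjects()      { /* the app's own opaque draw calls */ }
            private void drawTransparentObjects() { /* the app's own transparent draw calls */ }
        }

    For a single roughly convex model this is often enough; fully correct self-overlapping transparency still needs per-triangle sorting or an order-independent technique, which goes beyond changing the alpha in the fragment shader.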

    Read the article

  • 301 redirect from HTTP to HTTPS - how to be sure Google is fetching the correct information?

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have one site where we implemented a 301 redirect on the homepage from HTTP to HTTPS. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our Webmaster Tools account I notice that we are not being provided with any webmaster information (e.g., search queries, backlinks, etc...) related to our homepage under SSL. I performed a Fetch as Google on the homepage and the information it returned is: HTTP/1.1 301 Moved Permanently Date: Fri, 08 Nov 2013 17:26:24 GMT Server: Apache/2.2.16 (Debian) Location: https://mysite.com/ Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 242 Keep-Alive: timeout=15, max=100 Connection: Keep-Alive Content-Type: text/html; charset=iso-8859-1 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>301 Moved Permanently</title> </head><body> <h1>Moved Permanently</h1> <p>The document has moved <a href="https://mysite.com/">here</a>.</p> <hr> <address>Apache/2.2.16 (Debian) Server at mysite.com</address> </body></html> I am worried by the fact that Google fetch is not getting the correct Title tags and Meta information from our homepage and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google is correctly indexing all our pages and being able to flow from the HTTPS to the HTTP without issues. Does anybody have any advice on how we can correctly set this up or be sure that Google is fetching the correct information?
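
    For what it's worth, the 301 itself looks healthy in the fetch output above; what usually helps is making the redirect (and the canonical/sitemap URLs) point consistently at one protocol for the whole site, not just the homepage, and adding the HTTPS site as its own property in Webmaster Tools, since the HTTP and HTTPS versions are reported separately. A hedged Apache sketch of the redirect half, reusing the mysite.com placeholder from the question and assuming mod_rewrite is enabled:

        # Send any plain-HTTP request to the HTTPS version of the same path (sitewide, not homepage-only).
        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^ https://mysite.com%{REQUEST_URI} [R=301,L]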

    Read the article
