Search Results

Search found 4308 results on 173 pages for 'negative zero'.


  • Demantra 7.3.1.3 Controlling MDP_MATRIX Combinations Assigned to Forecasting Tasks Using TargetTaskSize

    - by user702295
New 7.3.1.3 parameter: TargetTaskSize
    Old parameter: BranchID Multiple (deprecated 7.3.1.3 onwards)
    Parameter Location: Parameters > System Parameters > Engine > Proport
    Default: 0
    Engine Mode: Both
    Details: Specifies how many MDP_MATRIX combinations the analytical engine attempts to assign to each forecasting task. Allocation will be affected by forecast tree branch size. TargetTaskSize is automatically calculated. It holds the preferred branch size, in number of combinations at the lowest level. This parameter is adjusted to a lower value for smaller schemas, depending on the number of available engines.
    - As the forecast is generated, the engine goes up the tree using max_fore_level and not top_level -1. Max_fore_level has to be less than or equal to top_level -1. Due to this requirement, combinations falling under the same top level -1 member must be in the same task. A member of the top level -1 of the forecast tree is known as a branch. An engine task is therefore comprised of one or more branches.
    - Reveal current task size: go to Engine Administrator --> View --> Branch Information and run the application on your Demantra schema. This will be deprecated in 7.3.1.3 since there is no longer a means of adjusting the branch size directly. The focus is now on proper hierarchy / forecast design.
    - Control of tasks: the number of tasks created is the lowest of the number of branches (as defined by top level -1 members in the forecast tree), the number of engine sessions, and the value of TargetTaskSize. You may be used to using the branch multiplier in this calculation; as of 7.3.1.3, the branch ID multiple is deprecated.
    - Discovery of current branch size: review the 2nd highest level in the forecast tree (below highest/highest), as this is the level which determines the size of the branches. If a few resulting tasks are too large, it is recommended that the forecast tree level driving branches be revised or at times completely removed from the forecast tree.
    - Control of forecast tree branch size:
        - Run the following SQL to determine how evenly the branches are being split by the engine:
            select count(*), branch_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by branch_id;
          This will tell you whether some of the individual branches have an unusually large number of rows, which might indicate that the engine is not efficiently dividing up the parallel tasks.
        - Based on the results of this SQL, we may want to adjust the branch ID multiplier and/or the number of engines (both of these settings are found in the Engine Administrator).
            select count(*), level_id from mdp_matrix where prediction_status = 1 and do_fore = 1 group by level_id;
          This will tell us at which level of the forecast tree the forecast is being generated. Having a majority of combinations higher on the forecast tree might indicate either a poorly designed forecast tree and/or engine parameters that are too strict. Based on these results we would adjust the forecast tree to see if choosing a different hierarchy might produce a forecast, with more combinations, at a lower level.
For example:
    - Review the 2nd highest level in the forecast tree (below highest/highest), as this is the level which determines the size of the branches.
    - If a few resulting tasks are too large, it is recommended that the forecast tree level driving branches be revised or at times completely removed from the forecast tree.
        - For example, suppose the highest level of the forecast tree is set to Brand/All Locations.
        - You have 10 brands, but 2 of the brands account for 67% and 29% of all combinations.
        - There is a distinct possibility that the tasks resulting from these 2 branches will be too large for a single engine to process. Some possible solutions could be to remove the Brand level and instead use a different product grouping which has a more even distribution, possibly Product Group.
        - It is also possible to add a location dimension to this forecast tree level, for example Customer. This will also reduce forecast tree branch size and will deliver a balanced task allocation.
    - A correctly configured forecast tree is something that is done by the implementation team and is not the responsibility of Oracle Support.
    Allocation will be affected by forecast tree branch size. When TargetTaskSize is set to 0, the default value, the system automatically calculates a value for 'TargetTaskSize' depending on the number of engines.
    - QUESTION: Does this mean that if TargetTaskSize is 1, we use tree branch size to allocate branches to tasks instead of automatically calculating the size?
      ANSWER: Development strongly recommends that the setting of TargetTaskSize remain at the DEFAULT of ZERO (0).
    - How to control the number of engines?
      Determine how many CPUs are on the machine(s) that is (are) running the engine. As mentioned earlier, the general rule is that you should designate 2 engines per CPU that is available. So, for example, if you are running the engine on a machine that has 4 CPUs, then you can have up to 8 engines designated in the Engine Administrator. In this type of architecture, instead of having one 'localhost' in your Engine Settings screen, you would have 'localhost' repeated eight times in this field.
      Where do I set the number of engines?
      To add multiple computers where the engine will run, first back up the Settings.xml file under the Analytical Engines\bin\ folder, then edit it and add the selected machines there. For example, this will allow 3 engines to start:
        <Entry>
          <Key argument="ComputerNames" />
          <Value type="string" argument="localhost,localhost,localhost" />
        </Entry>
    Otherwise, if there are no additional engines defined, the calculated value of 'TargetTaskSize' is used. (Oracle does not recommend changing the default value.) The TargetTaskSize holds the engine's preferred branch size, in number of level 1 combinations.
    - Level 1 combinations are known as the group size. The engine manager will use this parameter to attempt to create branches of similar size.
      * The engine manager will not create engines that do not have a branch.
    The engine divider algorithm uses the value of 'TargetTaskSize' as a system-preferred branch size to create branches that are more equal in size, which improves engine performance.
The engine divider will try to add as many branches as possible to an existing task, up to the limit of 'TargetTaskSize' level 1 combinations, before creating new tasks.
    Coming up next:
    - The engine divider
    - Group size
    - Level 1 combinations
    - MAX_FORE_LEVEL
    - Engine Parameters
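As a hedged illustration of the branch-evenness check described above (it assumes nothing beyond the MDP_MATRIX columns the note itself queries), the distribution SQL can be extended with an analytic total so that skewed branches stand out at a glance:

        -- Sketch: share of forecastable combinations per branch (Oracle SQL).
        -- Branches far above the average percentage are candidates for revising
        -- the forecast tree level that drives branching, as discussed above.
        select branch_id,
               count(*) as combinations,
               round(100 * count(*) / sum(count(*)) over (), 1) as pct_of_total
        from   mdp_matrix
        where  prediction_status = 1
        and    do_fore = 1
        group  by branch_id
        order  by combinations desc;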

    Read the article

  • Clean Code Development & Flexible work environment - MSCC 26.10.2013

Finally, some spare time to summarize my impressions and experiences of the recent meetup of the Mauritius Software Craftsmanship Community. I already posted my comment on the event and on our social media networks: Professional - It's getting better with our meetups and I really appreciated that 'seniors' and 'juniors' were present today. Despite running a little bit out of time it was really great to see more students coming to the gathering. This time we changed location for our Saturday meetup and it worked out very well. A big thank you to Ebene Accelerator, namely Mrs Poonum, for the ability to use their meeting rooms for our community get-together. Already some weeks ago I had a very pleasant conversation with her about the MSCC aims, 'mission' and how we organise things. Additionally, I think that an environment like the Ebene Accelerator is a good choice as it acts as an incubator for young developers and start-ups. Reactions from other craftsmen: Before I put my thoughts about our recent meeting down, I'd like to mention and cross-link to some of the other craftsmen that were present: "MSCC meet up is a massive knowledge gaining strategies for students, future entrepreneurs, or for geeks all around. Knowledge sharing becomes a fun. For those who have not been able to made it do subscribe on our MSCC meet up group at meetup.com." -- Nitin on Learning is fun with #MSCC #Ebene Accelerator "We then talked about the IT industry in Mauritius, salary issues in various field like system administration, software development etc. We analysed the reasons why people tend to hop from one company to another. That was a fun debate." -- Ish on MSCC meetup - Gang of Geeks "Flexible Learning Environment was quite interesting since these lines struck cords : "You're not a secretary....9 to 5 shouldn't suit you"....This allowed reflection...deep reflection....especially regarding the local mindset...which should be changed in a way which would promote creativity rather than choking it till death..." -- Yannick on 2nd MSCC Monthly Meet-up And others on Facebook... ;-) Visual impressions are available on our Meetup event page. More first-time attendees: With great pleasure I noticed that we once again had more first-time visitors. A quick look around showed that we had a majority of UoM students in their first, second or final year. Some of them are already participating in the UoM Computer Club or are nominated as members of the Microsoft Student Partner (MSP) programme. Personally, I really appreciate the fact that the MSCC is able to gather such a broad audience. And as I wrote initially, the MSCC is technology-agnostic; we want IT people from any segment of this business. Of course, students who are about to delve into the 'real world' of working are highly welcome, and I hope that they might get one or another glimpse of experience or advice from employees. Sticking to the schedule? No, not really... And honestly, it was a good choice to go a little bit off the beaten track. I mean, yes, we have a 'rough' agenda of topics that we would like to talk about or have a presentation on. But we keep it 'agile'. Due to the high number of new faces, we initiated another quick round of introductions and I gave a really brief overview of the MSCC. Next, we started to reflect on the Clean Code Developer (CCD) - Red Grade which we introduced at the last meetup. Nirvan was the lucky one and he did a good job of summarizing the various abbreviations of the first level of being a CCD.
Actually, more interestingly, we exchanged experience about the principles and practices of Red Grade, and it was very informative to learn that Yann actually 'interviewed' a couple of friends, other students, local guys working in IT companies as well as some IT friends from India in order to counter-check what he had learned first-hand about Clean Code. Currently, he is reading the book by Robert C. Martin on that topic and I'm looking forward to his review soon. More output generates more input: What seems to be a personal mantra has been working out pretty well for me since the beginning of this year. Being more active on social media networks, writing more articles on my blog, starting the Mauritius Software Craftsmanship Community, and contributing more to other online communities has helped me to receive more project requests, job offers and possibilities to expand my business at IOS Indian Ocean Software Ltd. Actually, it is not a coincidence that one of the questions new craftsmen should answer during registration asks about having a personal blog. Whether you are just curious about IT, right in the middle of your Computer Studies, or have already been working in software development or system administration for a while, you should consider advertising and marketing yourself online. The easiest way to do this is to have online profiles on professional social media networks like LinkedIn, Xing, Twitter, and Google+ (not Facebook, which should be considered private only), and to have a personal blog. Why? -- Be yourself, be proud of your work, and let other people know that you're passionate about your profession. Trust me, this is going to open up opportunities you might not have dreamt about... Exchanging ideas about having a professional online presence - MSCC meetup on the 26th October 2013. Furthermore, consider putting your Curriculum Vitae online, too. There are quite a number of service providers like 1ClickCV, Stack Overflow Careers 2.0, etc. which give you the ability to have an up-to-date CV online. At the very least put it on your site, next to your personal blog, similar to what you can see on my site here. Cyber Island Mauritius - are we there? A couple of weeks ago I got a 'cold' message on LinkedIn from someone living in the U.S. asking about the circumstances and conditions of the IT world of Mauritius. He has a great business idea, venture capital and is currently looking for a team of software developers (mainly mobile - iOS) for a new startup here in Mauritius. Since then we have exchanged quite some details through private messages and Skype conversations, and I suggested that it might be a good chance for him to join our meetup through a conference call and see for himself about potential candidates. During approximately 30 to 40 minutes the brief idea of the new startup was presented (very promising state-of-the-art technology aspects and integration of various public APIs), and we had a good Q&A session about it. Thanks also to the excellent bandwidth provided by the Ebene Accelerator, the video conference between three parties went absolutely well. Clean Code Developer - Orange Grade: Hahaha - nice one... Being at the Orange Tower at Ebene and then talking about an Orange Grade as CCD.
Well, once again I provided an overview of the principles and practices in that rank of Clean Code, and similar to our last meetup we discussed the various aspects of each principle, whether someone had already got in touch with it during studies or work, and how it could affect their future view of their source code. Following are the principles and practices of Clean Code Developer - Orange Grade:
    CCD Orange Grade - Principles:
    - Single Level of Abstraction (SLA)
    - Single Responsibility Principle (SRP)
    - Separation of Concerns (SoC)
    - Source Code conventions
    CCD Orange Grade - Practices:
    - Issue Tracking
    - Automated Integration Tests
    - Reading, Reading, Reading
    - Reviews
    Especially the part on reading technical books got some extra attention. We quickly gathered our views on that and came up with a result that ranges between Zero (0) and up to Fifteen (15) book titles per year. Personally, I'm keeping my progress between Six (6) and Eight (8) titles per year, but at least One (1) per quarter of a year. Which is also connected to the fact that I'm participating in the O'Reilly Reader Review Program and have the additional benefit of access to free books simply by writing and publishing a review afterwards. We also had a good exchange on the extended topic of 'Reviews', which in my opinion is abnormally difficult here in Mauritius for various reasons. As far as I can tell from my experience working with Mauritian software developers, either as colleagues, employees or during consulting services, there are unfortunately two dominant patterns on that topic: keeping quiet, and running away. Honestly, I have no evidence about why these are the two 'solutions' on reviews, but that's the situation that I have had to face over the last couple of years. Sitting together and talking about problematic issues, tackling the root causes of de-motivational activities and working on general improvements doesn't seem to have much ground within the IT world of Mauritius. Are you a typist or a creative software craftsman? - MSCC meetup on the 26th October 2013. One very good example that we talked about was the fact of 'job hoppers', as you can easily observe on someone's CV - those people change jobs every single year, for no obvious reason! Frankly speaking, I wouldn't even consider an IT person like that for an interview. As a company you're investing money and effort into the abilities of your employees. Hiring someone who won't stay for a longer period is out of the question. And sorry to say, but this kind of IT guy smells fishy in terms of capabilities and is more likely to cause problems than actually produce productive results. One of the reasons why there is a probation period on an employment contract is to give you the liberty to leave as early as possible in case you don't like your new position. Don't fool yourself or waste other people's time and money by hanging around a full year only to snatch off the bonus payment... Future outlook: Developer's Conference: Even though it is not official yet, I have already mentioned it several times during our weekly Code & Coffee sessions. The MSCC is looking forward to being able to organise or to contribute to an upcoming IT event. Currently, the rough schedule is set for April 2014, but this mainly depends on the availability of location(s), a decent time frame for preparations, and the underlying procedures with public bodies to have it approved and so on.
As soon as the information about the date and location has been fixed, there will be a 'Call for Papers' period in order to attract local IT enthusiasts to apply for a session slot and talk about their field of work and their passion in IT. More to come for sure... My resume of the day: It was a great gathering and I am very pleased about the fact that we had another 15 craftsmen (plus 2 businessmen on the conference call plus 2 young apprentices) in the same room, talking about IT-related topics and sharing their experience as employees and students. Personally, I really appreciated the feedback from the students about their current view on their future career, and I really hope that some of them are going to pursue their dreams. Start promoting yourself and it will happen... Looking forward to your blogs! And last but not least, our numbers on Meetup and Facebook have increased as a direct consequence of this meetup. Please spread the word about the MSCC and get your friends and colleagues to join our official site. The higher the number of craftsmen we have, the better the chances we have to achieve something great! Thanks!

    Read the article

  • Taming Hopping Windows

    - by Roman Schindlauer
    At first glance, hopping windows seem fairly innocuous and obvious. They organize events into windows with a simple periodic definition: the windows have some duration d (e.g. a window covers 5 second time intervals), an interval or period p (e.g. a new window starts every 2 seconds) and an alignment a (e.g. one of those windows starts at 12:00 PM on March 15, 2012 UTC). var wins = xs     .HoppingWindow(TimeSpan.FromSeconds(5),                    TimeSpan.FromSeconds(2),                    new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc)); Logically, there is a window with start time a + np and end time a + np + d for every integer n. That’s a lot of windows. So why doesn’t the following query (always) blow up? var query = wins.Select(win => win.Count()); A few users have asked why StreamInsight doesn’t produce output for empty windows. Primarily it’s because there is an infinite number of empty windows! (Actually, StreamInsight uses DateTimeOffset.MaxValue to approximate “the end of time” and DateTimeOffset.MinValue to approximate “the beginning of time”, so the number of windows is lower in practice.) That was the good news. Now the bad news. Events also have duration. Consider the following simple input: var xs = this.Application                 .DefineEnumerable(() => new[]                     { EdgeEvent.CreateStart(DateTimeOffset.UtcNow, 0) })                 .ToStreamable(AdvanceTimeSettings.IncreasingStartTime); Because the event has no explicit end edge, it lasts until the end of time. So there are lots of non-empty windows if we apply a hopping window to that single event! For this reason, we need to be careful with hopping window queries in StreamInsight. Or we can switch to a custom implementation of hopping windows that doesn’t suffer from this shortcoming. The alternate window implementation produces output only when the input changes. We start by breaking up the timeline into non-overlapping intervals assigned to each window. In figure 1, six hopping windows (“Windows”) are assigned to six intervals (“Assignments”) in the timeline. Next we take input events (“Events”) and alter their lifetimes (“Altered Events”) so that they cover the intervals of the windows they intersect. In figure 1, you can see that the first event e1 intersects windows w1 and w2 so it is adjusted to cover assignments a1 and a2. Finally, we can use snapshot windows (“Snapshots”) to produce output for the hopping windows. Notice however that instead of having six windows generating output, we have only four. The first and second snapshots correspond to the first and second hopping windows. The remaining snapshots however cover two hopping windows each! While in this example we saved only two events, the savings can be more significant when the ratio of event duration to window duration is higher. Figure 1: Timeline The implementation of this strategy is straightforward. We need to set the start times of events to the start time of the interval assigned to the earliest window including the start time. Similarly, we need to modify the end times of events to the end time of the interval assigned to the latest window including the end time. The following snap-to-boundary function that rounds a timestamp value t down to the nearest value t' <= t such that t' is a + np for some integer n will be useful. For convenience, we will represent both DateTime and TimeSpan values using long ticks: static long SnapToBoundary(long t, long a, long p) {     return t - ((t - a) % p) - (t > a ? 
0L : p); } How do we find the earliest window including the start time for an event? It’s the window following the last window that does not include the start time, assuming that there are no gaps in the windows (i.e. duration >= interval), which is a limitation of this solution. To find the end time of that antecedent window, we need to know the alignment of window ends: long e = a + (d % p); Using the window end alignment, we are finally ready to describe the start time selector: static long AdjustStartTime(long t, long e, long p) {     return SnapToBoundary(t, e, p) + p; } To find the latest window including the end time for an event, we look for the last window start time (non-inclusive): public static long AdjustEndTime(long t, long a, long d, long p) {     return SnapToBoundary(t - 1, a, p) + p + d; } Bringing it together, we can define the translation from events to ‘altered events’ as in Figure 1: public static IQStreamable<T> SnapToWindowIntervals<T>(IQStreamable<T> source, TimeSpan duration, TimeSpan interval, DateTime alignment) {     if (source == null) throw new ArgumentNullException("source");     // reason about DateTime and TimeSpan in ticks     long d = Math.Min(DateTime.MaxValue.Ticks, duration.Ticks);     long p = Math.Min(DateTime.MaxValue.Ticks, Math.Abs(interval.Ticks));     // set alignment to earliest possible window     var a = alignment.ToUniversalTime().Ticks % p;     // verify constraints of this solution     if (d <= 0L) { throw new ArgumentOutOfRangeException("duration"); }     if (p == 0L || p > d) { throw new ArgumentOutOfRangeException("interval"); }     // find the alignment of window ends     long e = a + (d % p);     return source.AlterEventLifetime(         evt => ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p)),         evt => ToDateTime(AdjustEndTime(evt.EndTime.ToUniversalTime().Ticks, a, d, p)) -             ToDateTime(AdjustStartTime(evt.StartTime.ToUniversalTime().Ticks, e, p))); } public static DateTime ToDateTime(long ticks) {     // just snap to min or max value rather than under/overflowing     return ticks < DateTime.MinValue.Ticks         ? new DateTime(DateTime.MinValue.Ticks, DateTimeKind.Utc)         : ticks > DateTime.MaxValue.Ticks         ? new DateTime(DateTime.MaxValue.Ticks, DateTimeKind.Utc)         : new DateTime(ticks, DateTimeKind.Utc); } Finally, we can describe our custom hopping window operator: public static IQWindowedStreamable<T> HoppingWindow2<T>(     IQStreamable<T> source,     TimeSpan duration,     TimeSpan interval,     DateTime alignment) {     if (source == null) { throw new ArgumentNullException("source"); }     return SnapToWindowIntervals(source, duration, interval, alignment).SnapshotWindow(); } By switching from HoppingWindow to HoppingWindow2 in the following example, the query returns quickly rather than gobbling resources and ultimately failing!
public void Main() {     var start = new DateTimeOffset(new DateTime(2012, 6, 28), TimeSpan.Zero);     var duration = TimeSpan.FromSeconds(5);     var interval = TimeSpan.FromSeconds(2);     var alignment = new DateTime(2012, 3, 15, 12, 0, 0, DateTimeKind.Utc);     var events = this.Application.DefineEnumerable(() => new[]     {         EdgeEvent.CreateStart(start.AddSeconds(0), "e0"),         EdgeEvent.CreateStart(start.AddSeconds(1), "e1"),         EdgeEvent.CreateEnd(start.AddSeconds(1), start.AddSeconds(2), "e1"),         EdgeEvent.CreateStart(start.AddSeconds(3), "e2"),         EdgeEvent.CreateStart(start.AddSeconds(9), "e3"),         EdgeEvent.CreateEnd(start.AddSeconds(3), start.AddSeconds(10), "e2"),         EdgeEvent.CreateEnd(start.AddSeconds(9), start.AddSeconds(10), "e3"),     }).ToStreamable(AdvanceTimeSettings.IncreasingStartTime);     var adjustedEvents = SnapToWindowIntervals(events, duration, interval, alignment);     var query = from win in HoppingWindow2(events, duration, interval, alignment)                 select win.Count();     DisplayResults(adjustedEvents, "Adjusted Events");     DisplayResults(query, "Query"); } As you can see, instead of producing a massive number of windows for the open start edge e0, a single window is emitted from 12:00:15 AM until the end of time:

Adjusted Events
StartTime                EndTime                   Payload
6/28/2012 12:00:01 AM    12/31/9999 11:59:59 PM    e0
6/28/2012 12:00:03 AM    6/28/2012 12:00:07 AM     e1
6/28/2012 12:00:05 AM    6/28/2012 12:00:15 AM     e2
6/28/2012 12:00:11 AM    6/28/2012 12:00:15 AM     e3

Query
StartTime                EndTime                   Payload
6/28/2012 12:00:01 AM    6/28/2012 12:00:03 AM     1
6/28/2012 12:00:03 AM    6/28/2012 12:00:05 AM     2
6/28/2012 12:00:05 AM    6/28/2012 12:00:07 AM     3
6/28/2012 12:00:07 AM    6/28/2012 12:00:11 AM     2
6/28/2012 12:00:11 AM    6/28/2012 12:00:15 AM     3
6/28/2012 12:00:15 AM    12/31/9999 11:59:59 PM    1

Regards, The StreamInsight Team

    Read the article

  • XSL testing empty strings with <xsl:if> and sorting

    - by AdRock
I am having trouble with a template that has to check 3 different nodes and, if they are not empty, print the data. I am using <xsl:if> for each node and then doing the output, but it is not printing anything. It is like the test returns zero. I have selected the parent node of each node I want to check the length on as the template match, but it still doesn't work. Another thing: how do I sort the list using <xsl:sort>? I tried using this but I get an error about loading the stylesheet. If I take out the sort it works: <xsl:template match="folktask/member"> <xsl:if test="user/account/userlevel='3'"> <xsl:sort select="festival/event/datefrom"/> <div class="userdiv"> <xsl:apply-templates select="user"/> <xsl:apply-templates select="festival"/> </div> </xsl:if> </xsl:template> <xsl:template match="festival"> <xsl:apply-templates select="contact"/> </xsl:template> This should hopefully finish all my stylesheets. This is the template I am calling: <xsl:template match="contact"> <xsl:if test="string-length(contelephone)!=0"> <div class="small bold">TELEPHONE:</div> <div class="large"> <xsl:value-of select="contelephone/." /> </div> </xsl:if> <xsl:if test="string-length(conmobile)!=0"> <div class="small bold">MOBILE:</div> <div class="large"> <xsl:value-of select="conmobile/." /> </div> </xsl:if> <xsl:if test="string-length(fax)!=0"> <div class="small bold">FAX:</div> <div class="large"> <xsl:value-of select="fax/." /> </div> </xsl:if> </xsl:template> and a section of my xml. If you need me to edit my post so you can see the full code I will, but the rest works fine. <folktask> <member> <user id="4"> <personal> <name>Connor Lawson</name> <sex>Male</sex> <address1>12 Ash Way</address1> <address2></address2> <city>Swindon</city> <county>Wiltshire</county> <postcode>SN3 6GS</postcode> <telephone>01791928119</telephone> <mobile>07338695664</mobile> <email>[email protected]</email> </personal> <account> <username>iTuneStinker</username> <password>3a1f5fda21a07bfff20c41272bae7192</password> <userlevel>3</userlevel> <signupdate>2010-03-26T09:23:50</signupdate> </account> </user> <festival id="1"> <event> <eventname>Oxford Folk Festival</eventname> <url>http://www.oxfordfolkfestival.com/</url> <datefrom>2010-04-07</datefrom> <dateto>2010-04-09</dateto> <location>Oxford</location> <eventpostcode>OX1 9BE</eventpostcode> <coords> <lat>51.735640</lat> <lng>-1.276136</lng> </coords> </event> <contact> <conname>Stuart Vincent</conname> <conaddress1>P.O. Box 642</conaddress1> <conaddress2></conaddress2> <concity>Oxford</concity> <concounty>Bedfordshire</concounty> <conpostcode>OX1 3BY</conpostcode> <contelephone>01865 79073</contelephone> <conmobile></conmobile> <fax></fax> <conemail>[email protected]</conemail> </contact> </festival> </member> </folktask>
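A hedged sketch of the usual fix (assuming the intent is to sort the userlevel-3 members by festival/event/datefrom): in XSLT 1.0, <xsl:sort> is only valid as a direct child of <xsl:apply-templates> or <xsl:for-each>, which is why placing it inside <xsl:if> makes the stylesheet fail to load. Moving both the test and the sort into the apply-templates call preserves the behaviour:

    <xsl:template match="folktask">
      <xsl:apply-templates select="member[user/account/userlevel='3']">
        <xsl:sort select="festival/event/datefrom"/>
      </xsl:apply-templates>
    </xsl:template>

    <xsl:template match="member">
      <div class="userdiv">
        <xsl:apply-templates select="user"/>
        <xsl:apply-templates select="festival"/>
      </div>
    </xsl:template>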

    Read the article

  • How to export SSIS to Microsoft Excel without additional software?

    - by Dr. Zim
This question is long-winded because I have been updating it over a very long time trying to get SSIS to properly export Excel data. I managed to solve this issue, although not correctly. Aside from someone providing a correct answer, the solution listed in this question is not terrible. The only answer I found was to create a single-row named range wide enough for my columns. In the named range put sample data and hide it. SSIS appends the data and reads metadata from the single row (that is close enough for it to drop stuff in it). The data takes the format of the hidden single row. This allows headers, etc. WOW, what a pain in the butt. It will take over 450 days of exports to recover the time lost. However, I still love SSIS and will continue to use it because it is still way better than Filemaker LOL. My next attempt will be doing the same thing in the report server. Original question notes: If you are in the SQL Server Integration Services designer and want to export data to an Excel file starting on something other than the first line, let's say the fourth line, how do you specify this? I tried going into the Excel Destination of the Data Flow, changed the AccessMode to OpenRowSet from Variable, then set the variable to "YPlatters$A4:I20000". This fails, saying it cannot find the sheet. The sheet is called YPlatters. I thought you could specify (Sheet$)(Starting Cell):(Ending Cell)? Update: Apparently in Excel you can select a set of cells and name them with the name box. This allows you to select the name instead of the sheet without the $ dollar sign. Oddly enough, whatever range you specify, it appends the data to the next row after the range. Oddly, as you add data, it increases the named selection's row count. Another odd thing is the data takes the format of the last line of the range specified. My header rows are bold. If I specify a range that ends with the header row, the data appends to the row below, and makes all the entries bold. If you specify one row lower, it puts a blank line between the header row and the data, but the data is not bold. Another update: No matter what I try, SSIS samples the "first row" of the file and sets the metadata according to what it finds. However, if you have sample data that has a value of zero but is formatted as the first row, it treats that column as text and inserts numeric values with a single quote in front ('123.34). I also tried headers that do not reflect the data types of the columns. I tried changing the metadata of the Excel destination, but it always changes it back when I run the project, then fails saying it will truncate data. If I tell it to ignore errors, it imports everything except that column. Several days of several hours apiece later... Another update: I tried every combination. A mostly-working example is to create the named range starting with the column headers. Format your column headers as you want the data to look, as the data takes on this format. In my example, these exist from A4 to E4, which is my defined range. SSIS appends to the row after the defined range, so defining A4 to E68 appends the rows starting at A69. You define the connection as having the first row contain the field names. It takes on the metadata of the header row, oddly, not the second row, and it guesses at the data type, not the formatted data type of the column, i.e., headers are text, so all my metadata is text. If your headers are bold, so is all of your data. I even tried making a sample data row without success...
I don't think anyone actually uses Excel with the default MS SSIS export. If you could define the "insert range" (A5 to E5) with no header row and format those columns (currency, not bold, etc.) without it skipping a row in Excel, this would be very helpful. From what I gather, no one uses SSIS to export Excel without a third-party connection manager. Any ideas on how to set this up properly so that data is formatted correctly, i.e., the metadata read from Excel is proper to the real data, and formatting inherits from the first row of data, not the headers in Excel? One last update (July 17, 2009): I got this to work very well. One thing I added to Excel was the IMEX=1 in the Excel connection string: "Excel 8.0;HDR=Yes;IMEX=1". This forces Excel (I think) to look at all rows to see what kind of data is in them. Generally, this does not drop information; say for instance you have a zip code and then about 9 rows down you have a zip+4: without this, Excel blanks that field entirely without error. With IMEX=1, it recognizes that Zip is actually a character field instead of numeric. And of course, one more update (August 27, 2009): The IMEX=1 will succeed importing data with missing contents in the first 8 rows, but it will fail exporting data where no data exists. So, have it on your import connection string, but not your export Excel connection string. I have to say, after so much fiddling, it works pretty well.
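For reference, a hedged sketch of where that property string sits in a full Jet OLE DB connection string; the file path here is an illustrative assumption, and only the Extended Properties part comes from the post:

    Provider=Microsoft.Jet.OLEDB.4.0;
    Data Source=C:\exports\YPlatters.xls;
    Extended Properties="Excel 8.0;HDR=Yes;IMEX=1"

Per the August 27 note above, the export-side connection string would carry the same Extended Properties minus IMEX=1.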

    Read the article

  • What is the best Battleship AI?

    - by John Gietzen
Battleship! Back in 2003 (when I was 17), I competed in a Battleship AI coding competition. Even though I lost that tournament, I had a lot of fun and learned a lot from it. Now, I would like to resurrect this competition, in search of the best Battleship AI. Here is the framework: Battleship.zip The winner will be awarded +450 reputation! The competition will be held starting on the 17th of November, 2009. No entries or edits later than zero-hour on the 17th (Central Standard Time) will be accepted. Submit your entries early, so you don't miss your opportunity! To keep this OBJECTIVE, please follow the spirit of the competition. Rules of the game: The game is played on a 10x10 grid. Each competitor will place each of 5 ships (of lengths 2, 3, 3, 4, 5) on their grid. No ships may overlap, but they may be adjacent. The competitors then take turns firing single shots at their opponent. A variation on the game allows firing multiple shots per volley, one for each surviving ship. The opponent will notify the competitor if the shot sinks, hits, or misses. Game play ends when all of the ships of any one player are sunk. Rules of the competition: The spirit of the competition is to find the best Battleship algorithm. Anything that is deemed against the spirit of the competition will be grounds for disqualification. Interfering with an opponent is against the spirit of the competition. Multithreading may be used under the following restrictions: No more than one thread may be running while it is not your turn. (Though, any number of threads may be in a "Suspended" state). No thread may run at a priority other than "Normal". Given the above two restrictions, you will be guaranteed at least 3 dedicated CPU cores during your turn. A limit of 1 second of CPU time per game is allotted to each competitor on the primary thread. Running out of time results in losing the current game. Any unhandled exception will result in losing the current game. Network access and disk access are allowed, but you may find the time restrictions fairly prohibitive. However, a few set-up and tear-down methods have been added to alleviate the time strain. Code should be posted on Stack Overflow as an answer, or, if too large, linked. Max total size (uncompressed) of an entry is 1 MB. Officially, .NET 2.0 / 3.5 is the only framework requirement. Your entry must implement the IBattleshipOpponent interface. Scoring: The first to win 51 games out of 101 wins the match. All competitors will play matched against each other, round-robin style. The best half of the competitors will then play a double-elimination tournament to determine the winner. (The smallest power of two that is greater than or equal to half, actually.) I will be using the TournamentApi framework for the tournament. The results will be posted here. If you submit more than one entry, only your best-scoring entry is eligible for the double-elim. Good luck! Have fun! EDIT 1: Thanks to Freed, who has found an error in the Ship.IsValid function. It has been fixed. Please download the updated version of the framework. EDIT 2: Since there has been significant interest in persisting stats to disk and such, I have added a few non-timed set-up and tear-down events that should provide the required functionality. This is a semi-breaking change. That is to say: the interface has been modified to add functions, but no body is required for them. Please download the updated version of the framework.
EDIT 3: Bug Fix 1: GameWon and GameLost were only getting called in the case of a time out. Bug Fix 2: If an engine was timing out every game, the competition would never end. Please download the updated version of the framework. EDIT 4: Results!
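As a purely illustrative sketch (it is not part of the Battleship.zip framework, whose IBattleshipOpponent members are not reproduced here), the placement rules above, in bounds on a 10x10 grid, no overlap, adjacency allowed, reduce to a check along these lines:

    // C# sketch: validate a fleet of 1-cell-wide ship rectangles
    // (lengths 2, 3, 3, 4, 5) against the stated placement rules.
    using System.Collections.Generic;
    using System.Drawing;

    static class PlacementRules
    {
        public static bool IsValidFleet(IList<Rectangle> ships)
        {
            var board = new Rectangle(0, 0, 10, 10);
            for (int i = 0; i < ships.Count; i++)
            {
                if (!board.Contains(ships[i]))
                    return false;               // must stay on the grid
                for (int j = 0; j < i; j++)
                    if (ships[i].IntersectsWith(ships[j]))
                        return false;           // no overlap (touching edges is fine)
            }
            return true;
        }
    }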

    Read the article

  • trying to override getView in a SimpleCursorAdapter gives NullPointerException

    - by Dimitry Hristov
Would very much appreciate any help or hint on where to go next. I'm trying to change the content of a row in a ListView programmatically. In one row there are 3 TextViews and a ProgressBar. I want to animate the ProgressBar if the 'result' column of the current row is zero. After reading some tutorials and docs, I came to the conclusion that LayoutInflater has to be used and getView() overridden. Maybe I am wrong on this. If I return row = inflater.inflate(R.layout.row, null); from the function, it gives a NullPointerException. Here is the code: private final class mySimpleCursorAdapter extends SimpleCursorAdapter { private Cursor localCursor; private Context localContext; public mySimpleCursorAdapter(Context context, int layout, Cursor c, String[] from, int[] to) { super(context, layout, c, from, to); this.localCursor = c; this.localContext = context; } /** * 1. ListView asks adapter "give me a view" (getView) for each item of the list * 2. A new View is returned and displayed */ public View getView(int position, View convertView, ViewGroup parent) { View row = super.getView(position, convertView, parent); LayoutInflater inflater = (LayoutInflater)localContext.getSystemService(Context.LAYOUT_INFLATER_SERVICE); String result = localCursor.getString(2); int resInt = Integer.parseInt(result); Log.d(TAG, "row " + row); // if 'result' column from the TABLE is 0, do something useful: if(resInt == 0) { ProgressBar progress = (ProgressBar) row.findViewById(R.id.update_progress); progress.setIndeterminate(true); TextView edit1 = (TextView)row.findViewById(R.id.row_id); TextView edit2 = (TextView)row.findViewById(R.id.request); TextView edit3 = (TextView)row.findViewById(R.id.result); edit1.setText("1"); edit2.setText("2"); edit3.setText("3"); row = inflater.inflate(R.layout.row, null); } return row; } here is the Stack Trace: 03-08 03:15:29.639: ERROR/AndroidRuntime(619): java.lang.NullPointerException 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.SimpleCursorAdapter.bindView(SimpleCursorAdapter.java:149) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.CursorAdapter.getView(CursorAdapter.java:186) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at com.dhristov.test1.test1$mySimpleCursorAdapter.getView(test1.java:105) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.AbsListView.obtainView(AbsListView.java:1256) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.ListView.makeAndAddView(ListView.java:1668) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.ListView.fillDown(ListView.java:637) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.ListView.fillSpecific(ListView.java:1224) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.ListView.layoutChildren(ListView.java:1499) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.AbsListView.onLayout(AbsListView.java:1113) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.view.View.layout(View.java:6830) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1119) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.LinearLayout.layoutVertical(LinearLayout.java:998) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.LinearLayout.onLayout(LinearLayout.java:918) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.view.View.layout(View.java:6830) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at
android.widget.LinearLayout.setChildFrame(LinearLayout.java:1119) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.LinearLayout.layoutVertical(LinearLayout.java:998) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.LinearLayout.onLayout(LinearLayout.java:918) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.view.View.layout(View.java:6830) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.FrameLayout.onLayout(FrameLayout.java:333) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.view.View.layout(View.java:6830) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.LinearLayout.setChildFrame(LinearLayout.java:1119) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.LinearLayout.layoutVertical(LinearLayout.java:998) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.LinearLayout.onLayout(LinearLayout.java:918) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.view.View.layout(View.java:6830) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.widget.FrameLayout.onLayout(FrameLayout.java:333) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.view.View.layout(View.java:6830) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.view.ViewRoot.performTraversals(ViewRoot.java:996) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.view.ViewRoot.handleMessage(ViewRoot.java:1633) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.os.Handler.dispatchMessage(Handler.java:99) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.os.Looper.loop(Looper.java:123) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at android.app.ActivityThread.main(ActivityThread.java:4363) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at java.lang.reflect.Method.invokeNative(Native Method) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at java.lang.reflect.Method.invoke(Method.java:521) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:860) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:618) 03-08 03:15:29.639: ERROR/AndroidRuntime(619): at dalvik.system.NativeStart.main(Native Method)
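A hedged sketch of the standard pattern (it assumes the layout and view IDs from the question): let super.getView() produce and bind the row, move the adapter's own cursor to the row's position before reading from it, and do not return a freshly inflated, unbound row, since handing unbound views back to the list for recycling is a common way to end up inside SimpleCursorAdapter.bindView with a NullPointerException:

    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        // Let SimpleCursorAdapter inflate/recycle and bind the row first.
        View row = super.getView(position, convertView, parent);

        // Read from the correct row: the adapter's cursor, moved to 'position'.
        Cursor cursor = getCursor();
        cursor.moveToPosition(position);
        int resInt = cursor.getInt(2);

        // Animate the ProgressBar only when the 'result' column is zero.
        ProgressBar progress = (ProgressBar) row.findViewById(R.id.update_progress);
        progress.setIndeterminate(resInt == 0);

        return row; // reuse the bound row; no second inflate
    }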

    Read the article

  • Cisco 800 series won't forward port

    - by sam
    Hello ServerFault, I am trying to forward port 444 from my cisco router to my Web Server (192.168.0.2). As far as I can tell, my port forwarding is configured correctly, yet no traffic will pass through on port 444. Here is my config: ! version 12.3 service config no service pad service tcp-keepalives-in service tcp-keepalives-out service timestamps debug uptime service timestamps log uptime service password-encryption no service dhcp ! hostname QUESTMOUNT ! logging buffered 16386 informational logging rate-limit 100 except warnings no logging console no logging monitor enable secret 5 -removed- ! username administrator secret 5 -removed- username manager secret 5 -removed- clock timezone NZST 12 clock summer-time NZDT recurring 1 Sun Oct 2:00 3 Sun Mar 3:00 aaa new-model ! ! aaa authentication login default local aaa authentication login userlist local aaa authentication ppp default local aaa authorization network grouplist local aaa session-id common ip subnet-zero no ip source-route no ip domain lookup ip domain name quest.local ! ! no ip bootp server ip inspect name firewall tcp ip inspect name firewall udp ip inspect name firewall cuseeme ip inspect name firewall h323 ip inspect name firewall rcmd ip inspect name firewall realaudio ip inspect name firewall streamworks ip inspect name firewall vdolive ip inspect name firewall sqlnet ip inspect name firewall tftp ip inspect name firewall ftp ip inspect name firewall icmp ip inspect name firewall sip ip inspect name firewall fragment maximum 256 timeout 1 ip inspect name firewall netshow ip inspect name firewall rtsp ip inspect name firewall skinny ip inspect name firewall http ip audit notify log ip audit po max-events 100 ip audit name intrusion info list 3 action alarm ip audit name intrusion attack list 3 action alarm drop reset no ftp-server write-enable ! ! ! ! crypto isakmp policy 1 authentication pre-share ! crypto isakmp policy 2 encr 3des authentication pre-share group 2 ! crypto isakmp client configuration group staff key 0 qS;,sc:q<skro1^, domain quest.local pool vpnclients acl 106 ! ! crypto ipsec transform-set tr-null-sha esp-null esp-sha-hmac crypto ipsec transform-set tr-des-md5 esp-des esp-md5-hmac crypto ipsec transform-set tr-des-sha esp-des esp-sha-hmac crypto ipsec transform-set tr-3des-sha esp-3des esp-sha-hmac ! crypto dynamic-map vpnusers 1 description Client to Site VPN Users set transform-set tr-des-md5 ! ! crypto map cm-cryptomap client authentication list userlist crypto map cm-cryptomap isakmp authorization list grouplist crypto map cm-cryptomap client configuration address respond crypto map cm-cryptomap 65000 ipsec-isakmp dynamic vpnusers ! ! ! ! interface Ethernet0 ip address 192.168.0.254 255.255.255.0 ip access-group 102 in ip nat inside hold-queue 100 out ! interface ATM0 no ip address no atm ilmi-keepalive dsl operating-mode auto ! interface ATM0.1 point-to-point pvc 0/100 encapsulation aal5mux ppp dialer dialer pool-member 1 ! ! interface Dialer0 bandwidth 640 ip address negotiated ip access-group 101 in no ip redirects no ip unreachables ip nat outside ip inspect firewall out ip audit intrusion in encapsulation ppp no ip route-cache no ip mroute-cache dialer pool 1 dialer-group 1 no cdp enable ppp pap sent-username -removed- password 7 -removed- ppp ipcp dns request crypto map cm-cryptomap ! 
ip local pool vpnclients 192.168.99.1 192.168.99.254 ip nat inside source list 105 interface Dialer0 overload ip nat inside source static tcp 192.168.0.2 444 interface Dialer0 444 ip nat inside source static tcp 192.168.0.51 9000 interface Dialer0 9000 ip nat inside source static udp 192.168.0.2 1433 interface Dialer0 1433 ip nat inside source static tcp 192.168.0.2 1433 interface Dialer0 1433 ip nat inside source static tcp 192.168.0.2 25 interface Dialer0 25 ip classless ip route 0.0.0.0 0.0.0.0 Dialer0 ip http server no ip http secure-server ! ip access-list logging interval 10 logging 192.168.0.2 access-list 1 remark The local LAN. access-list 1 permit 192.168.0.0 0.0.0.255 access-list 2 permit 192.168.0.0 access-list 2 remark Where management can be done from. access-list 2 permit 192.168.0.0 0.0.0.255 access-list 3 remark Traffic not to check for intrustion detection. access-list 3 deny 192.168.99.0 0.0.0.255 access-list 3 permit any access-list 101 remark Traffic allowed to enter the router from the Internet access-list 101 permit ip 192.168.99.0 0.0.0.255 192.168.0.0 0.0.0.255 access-list 101 deny ip 0.0.0.0 0.255.255.255 any access-list 101 deny ip 10.0.0.0 0.255.255.255 any access-list 101 deny ip 127.0.0.0 0.255.255.255 any access-list 101 deny ip 169.254.0.0 0.0.255.255 any access-list 101 deny ip 172.16.0.0 0.15.255.255 any access-list 101 deny ip 192.0.2.0 0.0.0.255 any access-list 101 deny ip 192.168.0.0 0.0.255.255 any access-list 101 deny ip 198.18.0.0 0.1.255.255 any access-list 101 deny ip 224.0.0.0 0.15.255.255 any access-list 101 deny ip any host 255.255.255.255 access-list 101 permit tcp 67.228.209.128 0.0.0.15 any eq 1433 access-list 101 permit tcp host 120.136.2.22 any eq 1433 access-list 101 permit tcp host 123.100.90.58 any eq 1433 access-list 101 permit udp 67.228.209.128 0.0.0.15 any eq 1433 access-list 101 permit udp host 120.136.2.22 any eq 1433 access-list 101 permit udp host 123.100.90.58 any eq 1433 access-list 101 permit tcp any any eq 444 access-list 101 permit tcp any any eq 9000 access-list 101 permit tcp any any eq smtp access-list 101 permit udp any any eq non500-isakmp access-list 101 permit udp any any eq isakmp access-list 101 permit esp any any access-list 101 permit tcp any any eq 1723 access-list 101 permit gre any any access-list 101 permit tcp any any eq 22 access-list 101 permit tcp any any eq telnet access-list 102 remark Traffic allowed to enter the router from the Ethernet access-list 102 permit ip any host 192.168.0.254 access-list 102 deny ip any host 192.168.0.255 access-list 102 deny udp any any eq tftp log access-list 102 permit ip 192.168.0.0 0.0.0.255 192.168.99.0 0.0.0.255 access-list 102 deny ip any 0.0.0.0 0.255.255.255 log access-list 102 deny ip any 10.0.0.0 0.255.255.255 log access-list 102 deny ip any 127.0.0.0 0.255.255.255 log access-list 102 deny ip any 169.254.0.0 0.0.255.255 log access-list 102 deny ip any 172.16.0.0 0.15.255.255 log access-list 102 deny ip any 192.0.2.0 0.0.0.255 log access-list 102 deny ip any 192.168.0.0 0.0.255.255 log access-list 102 deny ip any 198.18.0.0 0.1.255.255 log access-list 102 deny udp any any eq 135 log access-list 102 deny tcp any any eq 135 log access-list 102 deny udp any any eq netbios-ns log access-list 102 deny udp any any eq netbios-dgm log access-list 102 deny tcp any any eq 445 log access-list 102 permit ip 192.168.0.0 0.0.0.255 any access-list 102 permit ip any host 255.255.255.255 access-list 102 deny ip any any log access-list 105 remark Traffic to NAT access-list 105 deny ip 
192.168.0.0 0.0.0.255 192.168.99.0 0.0.0.255 access-list 105 permit ip 192.168.0.0 0.0.0.255 any access-list 106 remark User to Site VPN Clients access-list 106 permit ip 192.168.0.0 0.0.0.255 any dialer-list 1 protocol ip permit ! line con 0 no modem enable line aux 0 line vty 0 4 access-class 2 in transport input telnet ssh transport output none ! scheduler max-task-time 5000 ! end any ideas? :)
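The posted config already contains both the static NAT entry for TCP 444 and the "eq 444" permit in access-list 101, so a hedged first step (standard IOS show/debug commands; nothing here is specific to this router) is to see which half fails while a test connection arrives from outside:

    ! Is the static TCP 444 mapping installed?
    show ip nat translations | include 444
    ! Are the "eq 444" hit counters incrementing?
    show access-lists 101
    ! Watch translations being built during a live test
    debug ip nat detailed

If the ACL counters climb but no translation appears, look at the NAT/inspection path on Dialer0; if neither moves, the traffic is being dropped upstream or the server at 192.168.0.2 is not listening on 444.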

    Read the article

  • How to fix flicker when using Webkit transforms & transitions

    - by gargantaun
I have a very simple demo working that uses Webkit transforms and transitions for smooth horizontal scrolling between 'panels' (divs). The reason I want to go this route as opposed to a JavaScript-driven system is that it's for the iPad and JavaScript performance is quite poor, but the CSS transforms and transitions are smooth as silk. Sadly though, I'm getting a lot of flicker on the iPad with my demo. You can see the demo here. You'll need Safari or an iPad to see it in action. I've never seen this happening in any of the demos for transforms and transitions so I'm hopeful that this is fixable. Anyway here's the code that powers the thing.... The HTML looks like this. <html> <head> <title>Swipe Demo</title> <link href="test.css" rel="stylesheet" /> <link href="styles.css" rel="stylesheet" /> <script type="text/javascript" src="jquery.js"></script> <script type="text/javascript" src="functions.js"></script> <script type="text/javascript" src="swiping.js"></script> </head> <body> <div id="wrapper"> <div class='panel one'> <h1>This is panel 1</h1> </div> <div class='panel two'> <h1>This is panel 2</h1> </div> <div class='panel three'> <h1>This is panel 3</h1> </div> <div class='panel four'> <h1>This is panel 4</h1> </div> </div> </body> </html> The CSS looks like this body, html { padding: 0; margin: 0; background: #000; } #wrapper { width: 10000px; -webkit-transform: translateX(0px); } .panel { width: 1024px; height: 300px; background: #fff; display: block; float: left; position: relative; } and the javascript looks like this // Mouse / iPad Touch var touchSupport = (typeof Touch == "object"), touchstart = touchSupport ? 'touchstart' : 'mousedown', touchmove = touchSupport ? 'touchmove' : 'mousemove', touchend = touchSupport ? 'touchend' : 'mouseup'; $(document).ready(function(){ // set top and left to zero $("#wrapper").css("top", 0); $("#wrapper").css("left", 0); // get total number of panels var panelTotal; $(".panel").each(function(){ panelTotal += 1 }); // Touch Start // ------------------------------------------------------------------------------------------ var touchStartX; var touchStartY; var currentX; var currentY; var shouldMove = false; document.addEventListener(touchstart, swipeStart, false); function swipeStart(event){ touch = realEventType(event); touchStartX = touch.pageX; touchStartY = touch.pageY; var pos = $("#wrapper").position(); currentX = parseInt(pos.left); currentY = parseInt(pos.top); shouldMove = true; } // Touch Move // ------------------------------------------------------------------------------------------ var touchMoveX; var touchMoveY; var distanceX; var distanceY; document.addEventListener(touchmove, swipeMove, false); function swipeMove(event){ if(shouldMove){ touch = realEventType(event); event.preventDefault(); touchMoveX = touch.pageX; touchMoveY = touch.pageY; distanceX = touchMoveX - touchStartX; distanceY = touchMoveY - touchStartY; movePanels(distanceX); } } function movePanels(distance){ newX = currentX + (distance/4); $("#wrapper").css("left", newX); } // Touch End // ------------------------------------------------------------------------------------------ var cutOff = 100; var panelIndex = 0; document.addEventListener(touchend, swipeEnd, false); function swipeEnd(event){ touch = (touchSupport) ?
event.changedTouches[0] : event; var touchEndX = touch.pageX; var touchEndY = touch.pageY; updatePanelIndex(distanceX); gotToPanel(); shouldMove = false; } // -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- function updatePanelIndex(distance){ if(distanceX > cutOff) panelIndex -= 1; if(distanceX < (cutOff * -1)){ panelIndex += 1; } if(panelIndex < 0){ panelIndex = 0; } if(panelIndex >= panelTotal) panelIndex = panelTotal -1; console.log(panelIndex); } // -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- function gotToPanel(){ var panelPos = getTotalWidthOfElement($(".panel")) * panelIndex * -1; $("#wrapper").css("-webkit-transition-property", "translateX"); $("#wrapper").css("-webkit-transition-duration", "1s"); $("#wrapper").css("-webkit-transform", "translateX("+panelPos+"px)"); } }); function realEventType(event){ e = (touchSupport) ? event.targetTouches[0] : event; return e; }
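Two hedged suggestions, both common remedies for this symptom rather than verified fixes for this exact demo. First, the iPad flicker during transitions usually disappears once the animated element is promoted to its own compositing layer with backfaces hidden:

    #wrapper {
        -webkit-transform: translate3d(0, 0, 0);  /* force hardware compositing */
        -webkit-backface-visibility: hidden;      /* the usual anti-flicker switch */
    }

Second, the demo mixes positioning models: the drag moves the wrapper with css("left") while the snap animates -webkit-transform, and -webkit-transition-property is set to "translateX", which is not a CSS property name (the transitionable property is -webkit-transform). Driving both the drag and the snap through the transform alone keeps the element on the GPU compositing path and avoids the jump between the two models.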

    Read the article

  • Flex actionscript extending DateChooser, events in calendar

    - by Nemi
The ExtendedDateChooser class is a great solution for the simple event calendar used in my Flex project. You can find it if you google for "Adding-Calendar-Event-Entries-to-the-Flex-DateChooser-Component", with a link to the updated solution in the comments of the post. I posted the files below. The problem with that calendar is that the event text goes missing when the month is changed. Is there an updateCompleted event in ActionScript just like on the DateChooser Flex component? Like in: <mx:DateChooser id="dc" updateCompleted="goThroughDateChooserCalendarLayoutAndSetEventsInCalendarAgain()"/> When the scroll event is added, which is available in ActionScript, it gets dispatched, but after updateDisplayList() is fired, so I didn't manage to work out why the calendar events are erased. Any suggestions on what to add to the code, or maybe a function to override? ExtendedDateChooserClass.mxml <?xml version='1.0' encoding="utf-8"?> <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" xmlns:mycomp="cyberslingers.controls.*" layout="absolute" creationComplete="init()"> <mx:Script> <![CDATA[ import cyberslingers.controls.ExtendedDateChooser; import mx.rpc.events.ResultEvent; import mx.rpc.events.FaultEvent; import mx.controls.Alert; public var mycal:ExtendedDateChooser = new ExtendedDateChooser(); // collection to hold date, data and label [Bindable] public var dateCollection:XMLList = new XMLList(); private function init():void { eventList.send(); } private function readCollection(event:ResultEvent):void { dateCollection = event.result.calendarevent; //Position and size the calendar mycal.width = 400; mycal.height = 400; //Add the data from the XML file to the calendar mycal.dateCollection = dateCollection; //Add the calendar to the canvas this.addChild(mycal); } private function readFaultHandler(event:FaultEvent):void { Alert.show(event.fault.message, "Could not load data"); } ]]> </mx:Script> <mx:HTTPService id="eventList" url="data.xml" resultFormat="e4x" result="readCollection(event);" fault="readFaultHandler(event);"/> </mx:Application> ExtendedDateChooser.as package cyberslingers.controls { import flash.events.Event; import flash.events.TextEvent; import mx.collections.ArrayCollection; import mx.controls.Alert; import mx.controls.CalendarLayout; import mx.controls.DateChooser; import mx.core.UITextField; import mx.events.FlexEvent; public class ExtendedDateChooser extends DateChooser { public function ExtendedDateChooser() { super(); this.addEventListener(TextEvent.LINK, linkHandler); this.addEventListener(FlexEvent.CREATION_COMPLETE, addEvents); } //datasource public var dateCollection:XMLList = new XMLList(); //-------------------------------------- // Add events //-------------------------------------- /** * Loop through calendar control and add event links * @param e */ private function addEvents(e:Event):void { // loop through all the calendar children for(var i:uint = 0; i < this.numChildren; i++) { var calendarObj:Object = this.getChildAt(i); // find the CalendarLayout object if(calendarObj.hasOwnProperty("className")) { if(calendarObj.className == "CalendarLayout") { var cal:CalendarLayout = CalendarLayout(calendarObj); // loop through all the CalendarLayout children for(var j:uint = 0; j < cal.numChildren; j++) { var dateLabel:Object = cal.getChildAt(j); // find all UITextFields if(dateLabel.hasOwnProperty("text")) { var day:UITextField = UITextField(dateLabel); var dayHTML:String = day.text; day.selectable = true; day.wordWrap = true; day.multiline = true; day.styleName = "EventLabel"; //TODO: passing date as string is not ideal, tough to
validate //Make sure to add one to month since it is zero based var eventArray:Array = dateHelper((this.displayedMonth+1) + "/" + dateLabel.text + "/" + this.displayedYear); if(eventArray.length > 0) { for(var k:uint = 0; k < eventArray.length; k++) { dayHTML += "<br><A HREF='event:" + eventArray[k].data + "' TARGET=''>" + eventArray[k].label + "</A>"; } day.htmlText = dayHTML; } } } } } } } //-------------------------------------- // Events //-------------------------------------- /** * Handle clicking text link * @param e */ private function linkHandler(event:TextEvent):void { // What do we want to do when user clicks an entry? Alert.show("selected: " + event.text); } //-------------------------------------- // Helpers //-------------------------------------- /** * Build array of events for current date * @param string - current date * */ private function dateHelper(renderedDate:String):Array { var result:Array = new Array(); for(var i:uint = 0; i < dateCollection.length(); i++) { if(dateCollection[i].date == renderedDate) { result.push(dateCollection[i]); } } return result; } } } data.xml <?xml version="1.0" encoding="utf-8"?> <rss> <calendarevent> <date>8/22/2009</date> <data>This is a test 1</data> <label>Stephens Test 1</label> </calendarevent> <calendarevent> <date>8/23/2009</date> <data>This is a test 2</data> <label>Stephens Test 2</label> </calendarevent> </rss>
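    One hedged idea (the handler name below is my own): Flex 3's DateChooser dispatches a scroll event of type mx.events.DateChooserEvent when the displayed month changes, and since the redraw that wipes the labels happens afterwards, deferring the re-population with callLater() should run addEvents() after the pending updateDisplayList() pass. A minimal sketch against the class above:

    // ExtendedDateChooser.as - sketch of re-applying the labels after a month change.
    // Add to the imports: import mx.events.DateChooserEvent;

    public function ExtendedDateChooser()
    {
        super();
        this.addEventListener(TextEvent.LINK, linkHandler);
        this.addEventListener(FlexEvent.CREATION_COMPLETE, addEvents);
        // new: month navigation redraws the day labels, so re-add the events
        this.addEventListener(DateChooserEvent.SCROLL, handleMonthScroll);
    }

    private function handleMonthScroll(e:DateChooserEvent):void
    {
        // callLater() queues addEvents() for after the current update pass,
        // so the freshly redrawn UITextFields exist again when it runs
        callLater(addEvents, [e]);
    }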

    Read the article

  • Getting template metaprogramming compile-time constants at runtime

    - by GMan - Save the Unicorns
    Background Consider the following: template <unsigned N> struct Fibonacci { enum { value = Fibonacci<N-1>::value + Fibonacci<N-2>::value }; }; template <> struct Fibonacci<1> { enum { value = 1 }; }; template <> struct Fibonacci<0> { enum { value = 0 }; }; This is a common example and we can get the value of a Fibonacci number as a compile-time constant: int main(void) { std::cout << "Fibonacci(15) = "; std::cout << Fibonacci<15>::value; std::cout << std::endl; } But you obviously cannot get the value at runtime: int main(void) { std::srand(static_cast<unsigned>(std::time(0))); // ensure the table exists up to a certain size // (even though the rest of the code won't work) static const unsigned fibbMax = 20; Fibonacci<fibbMax>::value; // get index into sequence unsigned fibb = std::rand() % fibbMax; std::cout << "Fibonacci(" << fibb << ") = "; std::cout << Fibonacci<fibb>::value; std::cout << std::endl; } Because fibb is not a compile-time constant. Question So my question is: What is the best way to peek into this table at run-time? The most obvious solution (and "solution" should be taken lightly), is to have a large switch statement: unsigned fibonacci(unsigned index) { switch (index) { case 0: return Fibonacci<0>::value; case 1: return Fibonacci<1>::value; case 2: return Fibonacci<2>::value; . . . case 20: return Fibonacci<20>::value; default: return fibonacci(index - 1) + fibonacci(index - 2); } } int main(void) { std::srand(static_cast<unsigned>(std::time(0))); static const unsigned fibbMax = 20; // get index into sequence unsigned fibb = std::rand() % fibbMax; std::cout << "Fibonacci(" << fibb << ") = "; std::cout << fibonacci(fibb); std::cout << std::endl; } But now the size of the table is very hard coded and it wouldn't be easy to expand it to say, 40. The only one I came up with that has a similiar method of query is this: template <int TableSize = 40> class FibonacciTable { public: enum { max = TableSize }; static unsigned get(unsigned index) { if (index == TableSize) { return Fibonacci<TableSize>::value; } else { // too far, pass downwards return FibonacciTable<TableSize - 1>::get(index); } } }; template <> class FibonacciTable<0> { public: enum { max = 0 }; static unsigned get(unsigned) { // doesn't matter, no where else to go. // must be 0, or the original value was // not in table return 0; } }; int main(void) { std::srand(static_cast<unsigned>(std::time(0))); // get index into sequence unsigned fibb = std::rand() % FibonacciTable<>::max; std::cout << "Fibonacci(" << fibb << ") = "; std::cout << FibonacciTable<>::get(fibb); std::cout << std::endl; } Which seems to work great. The only two problems I see are: Potentially large call stack, since calculating Fibonacci<2 requires we go through TableMax all the way to 2, and: If the value is outside of the table, it returns zero as opposed to calculating it. So is there something I am missing? It seems there should be a better way to pick out these values at runtime. A template metaprogramming version of a switch statement perhaps, that generates a switch statement up to a certain number? Thanks in advance.
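    For what it's worth, with a modern compiler (C++14, for std::index_sequence) the "template-generated switch" the poster is reaching for can instead be a compile-time-generated array: expand the indices 0..Max into a std::array initializer, then index it at runtime. A sketch reusing the question's Fibonacci template:

    #include <array>
    #include <cstddef>
    #include <iostream>
    #include <utility>

    template <unsigned N> struct Fibonacci { enum { value = Fibonacci<N-1>::value + Fibonacci<N-2>::value }; };
    template <> struct Fibonacci<1> { enum { value = 1 }; };
    template <> struct Fibonacci<0> { enum { value = 0 }; };

    // One Fibonacci<Is> instantiation per pack element; the array is a
    // compile-time constant, so runtime cost is a single bounds check + lookup.
    template <std::size_t... Is>
    constexpr std::array<unsigned, sizeof...(Is)>
    makeFibTable(std::index_sequence<Is...>)
    {
        return {{ Fibonacci<Is>::value... }};
    }

    unsigned fibonacci(unsigned index)
    {
        static constexpr auto table = makeFibTable(std::make_index_sequence<41>{});
        // Out-of-range indices fall back to the runtime recurrence, matching
        // the default branch of the hand-written switch.
        return index < table.size() ? table[index]
                                    : fibonacci(index - 1) + fibonacci(index - 2);
    }

    int main()
    {
        std::cout << "Fibonacci(15) = " << fibonacci(15) << std::endl; // 610
    }

    Growing the table is now a one-character change to the index_sequence size, and there is no deep call chain at runtime.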

    Read the article

  • Driving me INSANE: Unable to Retrieve Metadata for

    - by Loren
    I've been spending the past 3 days trying to fix this problem I'm encountering - it's driving me insane... I'm not quite sure what is causing this bug - here are the details: MVC4 + Entity Framework 4.4 + MySql + POCO/Code First I'm setting up the above configuration .. here are my classes: namespace BTD.DataContext { public class BTDContext : DbContext { public BTDContext() : base("name=BTDContext") { } protected override void OnModelCreating(DbModelBuilder modelBuilder) { base.OnModelCreating(modelBuilder); //modelBuilder.Conventions.Remove<System.Data.Entity.Infrastructure.IncludeMetadataConvention>(); } public DbSet<Product> Products { get; set; } public DbSet<ProductImage> ProductImages { get; set; } } } namespace BTD.Data { [Table("Product")] public class Product { [Key] public long ProductId { get; set; } [DisplayName("Manufacturer")] public int? ManufacturerId { get; set; } [Required] [StringLength(150)] public string Name { get; set; } [Required] [DataType(DataType.MultilineText)] public string Description { get; set; } [Required] [StringLength(120)] public string URL { get; set; } [Required] [StringLength(75)] [DisplayName("Meta Title")] public string MetaTitle { get; set; } [DataType(DataType.MultilineText)] [DisplayName("Meta Description")] public string MetaDescription { get; set; } [Required] [StringLength(25)] public string Status { get; set; } [DisplayName("Create Date/Time")] public DateTime CreateDateTime { get; set; } [DisplayName("Edit Date/Time")] public DateTime EditDateTime { get; set; } } [Table("ProductImage")] public class ProductImage { [Key] public long ProductImageId { get; set; } public long ProductId { get; set; } public long? ProductVariantId { get; set; } [Required] public byte[] Image { get; set; } public bool PrimaryImage { get; set; } public DateTime CreateDateTime { get; set; } public DateTime EditDateTime { get; set; } } } Here is my web.config setup... <connectionStrings> <add name="BTDContext" connectionString="Server=localhost;Port=3306;Database=btd;User Id=root;Password=mypassword;" providerName="MySql.Data.MySqlClient" /> </connectionStrings> The database AND tables already exist... I'm still pretty new with mvc but was using this tutorial The application builds fine.. however when I try to add a controller using Product (BTD.Data) as my model class and BTDContext (BTD.DataContext) as my data context class I receive the following error: Unable to retrieve metadata for BTD.Data.Product using the same DbCompiledModel to create context against different types of database servers is not supported. Instead, create a separate DbCompiledModel for each type of server being used. I am at a complete loss - I've scoured google with almost every different variation of that error message above I can think of but to no avail. Here are the things i can verify... MySql is working properly I'm using MySql Connector version 6.5.4 and have created other ASP.net web forms + entity framework applications with ZERO problems I have also tried including/removing this in my web.config: <system.data> <DbProviderFactories> <remove invariant="MySql.Data.MySqlClient"/> <add name="MySQL Data Provider" invariant="MySql.Data.MySqlClient" description=".Net Framework Data Provider for MySQL" type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.5.4.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" /> </DbProviderFactories> I've literally been working on this bug for days - I'm to the point now that I would be willing to pay someone to solve it.. no joke... 
    I'd really love to use MVC 4 and Razor - I was so excited to get started on this, but now I'm pretty discouraged - and I truly appreciate any help/guidance on this! Also note: I'm using Entity Framework from NuGet...
    Another note: I was using the default Visual Studio template that creates your MVC project with the account pages and other stuff. I removed all references to the added files because they were trying to use the "DefaultConnection", which didn't exist - so I thought those files might be what was causing the error - however, still no luck after removing them. I just wanted to let everyone know I'm using the Visual Studio MVC project template, which pre-creates a bunch of files. I will be trying to recreate this all from a blank MVC project which doesn't have those files - I will update this once I have tested that.
    Other references: It appears someone else is having the same issue I am - the only difference is that they are using SQL Server. I tried tweaking all my code to follow the suggestions on this Stack Overflow question/answer here, but still to no avail.
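    One hedged avenue (an assumption on my part, not a verified fix - it depends on your EF and Connector/NET versions): this error can mean the tooling built the DbCompiledModel against EF's default connection factory, which is SQL Server unless configured otherwise, while the connection string points at MySQL. If your Connector/NET release ships MySql.Data.Entity.MySqlConnectionFactory, registering it as the default factory in web.config is worth a try:

    <entityFramework>
      <defaultConnectionFactory
          type="MySql.Data.Entity.MySqlConnectionFactory, MySql.Data.Entity" />
    </entityFramework>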

    Read the article

  • Help: Graph contest problem: maybe a modified Dijkstra or another alternative algorithm

    - by newba
    Hi all, I'm trying to solve this contest exercise about graphs:
    XPTO is an intrepid adventurer (a little too temerarious for his own good) who boasts about exploring every corner of the universe, no matter how inhospitable. In fact, he doesn't visit the planets where people can easily live; he prefers those where only a madman would go with a very good reason (several million credits, for instance). His latest exploit is trying to survive on Proxima III. The problem is that Proxima III suffers from storms of highly corrosive acids that destroy everything, including the spacesuits that were especially designed to withstand corrosion. Our intrepid explorer was caught in a rectangular area in the middle of one of these storms. Fortunately, he has an instrument that is capable of measuring the exact concentration of acid in each sector and how much damage it does to his spacesuit. Now, he only needs to find out if he can escape the storm.

    Problem: The problem consists of finding an escape route that will allow XPTO to escape the noxious storm. You are given the initial energy of the spacesuit, the size of the rectangular area and the damage that the spacesuit will suffer while standing in each sector. Your task is to find the exit sector, the number of steps necessary to reach it and the amount of energy his suit will have when he leaves the rectangular area. The escape route chosen should be the safest one (i.e., the one where his spacesuit will be the least damaged). Notice that Rodericus will perish if the energy of his suit reaches zero. In case there is more than one possible solution, choose the one that uses the least number of steps. If there are at least two exit sectors (X1, Y1) and (X2, Y2) with the same number of steps, choose the first if X1 < X2, or if X1 = X2 and Y1 < Y2.

    Constraints:
    0 < E <= 30000 the suit's starting energy
    0 <= W <= 500 the rectangle's width
    0 <= H <= 500 the rectangle's height
    0 < X < W the starting X position
    0 < Y < H the starting Y position
    0 <= D <= 10000 the damage sustained in each sector

    Input: The first number given is the number of test cases. Each case will consist of a line with the integers E, X and Y. The following line will have the integers W and H. The following lines will hold the matrix containing the damage D the spacesuit will suffer whilst in the corresponding sector. Notice that, as is often the case for computer geeks, (1,1) corresponds to the upper left corner.

    Output: If there is a solution, the output will be the remaining energy, the exit sector's X and Y coordinates and the number of steps of the route that will lead Rodericus to safety. In case there is no solution, the phrase Goodbye cruel world! will be written.

    Sample Input
    3
    40 3 3
    7 8
    12 11 12 11 3 12 12
    12 11 11 12 2 1 13
    11 11 12 2 13 2 14
    10 11 13 3 2 1 12
    10 11 13 13 11 12 13
    12 12 11 13 11 13 12
    13 12 12 11 11 11 11
    13 13 10 10 13 11 12
    8 3 4
    7 6
    4 3 3 2 2 3 2
    2 5 2 2 2 3 3
    2 1 2 2 3 2 2
    4 3 3 2 2 4 1
    3 1 4 3 2 3 1
    2 2 3 3 0 3 4
    10 3 4
    7 6
    3 3 1 2 2 1 0
    2 2 2 4 2 2 5
    2 2 1 3 0 2 2
    2 2 1 3 3 4 2
    3 4 4 3 1 1 3
    1 2 2 4 2 2 1

    Sample Output
    12 5 1 8
    Goodbye cruel world!
    5 1 4 2

    Basically, I think we have to do a modified Dijkstra, in which the distance between nodes is the suit's energy (which we subtract instead of summing, as is normal with distances) and the steps are the steps made along the path. The position with the best (energy, number of steps) pair is our way out. Important: XPTO obviously can't move diagonally, so we have to rule out those moves.
    I have many ideas, but I'm having real trouble implementing them... Could someone please help me think this through with some code or, at least, ideas? Am I totally wrong?
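    A rough sketch of that idea (my own code, not a verified contest solution; the exit-coordinate tie-break from the statement is left out for brevity): run plain Dijkstra over the grid, ordering states by (damage so far, steps), which is the same as maximizing remaining energy first and minimizing steps second. Here dmg holds the per-sector damage and (sx, sy) is the 0-based starting sector:

    #include <climits>
    #include <functional>
    #include <queue>
    #include <tuple>
    #include <vector>
    using namespace std;

    struct State { int damage, steps, x, y; };
    bool operator>(const State& a, const State& b) {
        return tie(a.damage, a.steps) > tie(b.damage, b.steps);
    }

    // Returns {remaining energy, exit x, exit y, steps}, or {} for "Goodbye cruel world!".
    vector<int> escape(const vector<vector<int>>& dmg, int E, int sx, int sy) {
        int H = dmg.size(), W = dmg[0].size();
        vector<vector<pair<int,int>>> best(H, vector<pair<int,int>>(W, {INT_MAX, INT_MAX}));
        priority_queue<State, vector<State>, greater<State>> pq;
        best[sy][sx] = {dmg[sy][sx], 0};   // assume he pays for the starting sector
        pq.push({dmg[sy][sx], 0, sx, sy});
        const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
        while (!pq.empty()) {
            State s = pq.top(); pq.pop();
            if (make_pair(s.damage, s.steps) != best[s.y][s.x]) continue; // stale entry
            if (s.damage >= E) break;      // even the least-damaged state kills him
            if (s.x == 0 || s.x == W - 1 || s.y == 0 || s.y == H - 1)
                return {E - s.damage, s.x, s.y, s.steps + 1}; // border sector: step out
            for (int d = 0; d < 4; ++d) {
                int nx = s.x + dx[d], ny = s.y + dy[d];
                pair<int,int> cand = {s.damage + dmg[ny][nx], s.steps + 1};
                if (cand < best[ny][nx]) {
                    best[ny][nx] = cand;
                    pq.push({cand.first, cand.second, nx, ny});
                }
            }
        }
        return {};
    }

    Whether the starting sector's damage counts and whether leaving from a border sector costs one extra step are assumptions here; checking against the sample cases would pin both down.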

    Read the article

  • Backing up my data causes my server to crash using Symantec Backup Exec 12, or How I Came to Loathe

    - by Kyle Noland
    I have a Dell PowerEdge 2850 running Windows Server 2003. It is the primary file server for one of my clients. I have another server also running Windows Server 2003 that acts as the core media server for Symantec Backup Exec 12. I recently upgraded from Backup Exec 11d to 12. This upgrade was necessary because we also just upgraded from Exchange 2003 to Exchange 2007. After the upgrade I had to push-install the new version 12 Backup Exec Remote Agents to each of the servers I am backing up (about 6 total). 5 of my servers are doing just fine, faithfully completing backups every night. My file server routinely crashes. Observations: When the server crashes, it does not blue screen, it just locks up completely. Even the mouse is unresponsive. If you leave the server locked up long enough, it will eventually reboot itself and hang on the Windows splash screen. There is absolutely zero useful Event Viewer evidence of a problem. The logs go from routine logging to an Unexplained Shutdown Event the next morning when I have to hard reset the server to get it to boot. 90% of the time the server does not boot cleanly, it hangs on the Windows splash screen. I don't have any light to shed here. When the server hangs all I can do is hard reset it and try again. Even after a successful boot and chkdsk /r operation, if you reboot the machine, you have a 90% chance it won't back up again cleanly. The back story: This server started crashing during nightly backups about a month ago. I tried everything I could think of to troubleshoot the problem and eventually had to give up because I could not keep coming to the office at 4 AM to try to get the server back online. One Friday I got lucky and the server stayed up for its entire full backup. I took this opportunity to restore the full backup to a temporary server I set up and switched all my users to the temporary. Then I reloaded the ailing file server. I kept all my users on the temporary file server for about 3 weeks. I installed the same Backup Exec Remote Agent and Trend Micro A/V client on the temporary server that I was using on the regular file server. During this time, I had absolutely no problems backing up the temporary server. I tested the reloaded file server extensively. I rebooted the server once an hour every day for 3 weeks trying to make it fail. It never did. I felt confident that the reload was the answer to my problems. I moved all of the data from the temporary server back to the regular server. I got 3 nightly backups out of it before it locked up again and started the familiar failure to boot cleanly behavior. This weekend I decided to monitor the file server through the entire backup job. I RDPd into the file server and also into the server running Backup Exec. On the file server I opened the Task Manager so I could view the processes and watch CPU and memory usage. Everything was running smoothly for about 60GB worth of backup. Then I noticed that the byte count of the backup job in Backup Exec had stopped progressing. I looked back over at my RDP session into the file server, and I was getting real time updates about CPU and memory usage still - both nearly 0%, which is unusual. Backups usually hover around 40% usage for the duration of the backup job. Let me reiterate this point: The screen was refreshing and I was getting real time Task Manager updates - until I clicked on the Start menu. The screen went black and the server locked up. 
    In truth, I think the server had already locked up; the video card just hadn't figured it out yet. I went back to my bag of tricks: driving to the office and hard resetting the server over and over again when it hung at the Windows splash screen. I did this for 2 hours without getting a successful boot. I started panicking because I did not have a decent backup to use to get everything back onto the working temporary file server. Once I had exhausted everything I knew to do, I took a deep breath, booted to the Windows Server 2003 CD and performed a repair installation of Windows. The server came back up fine, with all of my data intact. I can now reboot the server at will and it will come back up cleanly. The problem is that I'm afraid that as soon as I try to back that data up again I will be back at square one.
    So let me sum things up. Here is what I've done so far to troubleshoot this server:
    - Deleted and recreated the RAID 5 sets. Initialized the drives.
    - Reloaded the server with a fresh Server 2003 install.
    - Confirmed with Dell that I have installed the latest, Dell-approved BIOS and NIC drivers.
    - Uninstalled / reinstalled the Backup Exec Remote Agent.
    - Uninstalled the Trend Micro A/V client.
    - Configured the server not to reboot itself after a blue screen so I can see any stop error. I used to think the server was blue screening, but since I enabled this setting I now know that the server just completely locks up.
    - Ran chkdsk /r from the Windows Recovery Console. Several errors were found and corrected, but that did not help my problem.
    Help confirm or deny the following assumptions:
    - There are two problems at work here: why the server is locking up in the first place, and why the server won't boot cleanly after a lockup.
    - This is ultimately a software problem. The server works fine and can be rebooted cleanly all day long - until the first lockup - following a fresh OS load or even a repair installation.
    - This is not a problem with Backup Exec in general. All of my other servers back up just fine. For the record, all of the other servers run Server 2003, and some of them house more data than the file server in question here.
    Any help is appreciated. The irony is almost too much to bear: backing up my data is what is jeopardizing it.

    Read the article

  • File Access problems with SLES 10 SP2 OES2 SP1

    - by Blackhawk131
    We have identified a couple of repeatable, demonstrable scenarios with unexplained rejected folder access on our servers for Mac users. Hopefully, this can be presented to Novell for a solution. What we did to demonstrate scenario 1; 1. setup a PC and Mac side-by-side 2. login to our server and open up to a central location on both Mac and PC 3. on the PC in that central location create a folder 4. on the Mac in that central location drag the created folder to the Mac desktop, this should work fine, no problem 5. on the PC rename that folder 6. on the Mac drag a file to that renamed folder, this should error with the following message; a. You cannot copy some of these items to the destination because their names are too long for the destination. Do you want to skip copying these items and continue copying the other items? b. Select skip, response is the filename is copied to the location with zero or small byte size. Try opening it and you get file is corrupted error message. What we did to demonstrate scenario 2; 1. setup a PC and Mac side-by-side 2. login to our server and open up to a central location on both Mac and PC 3. on the PC in that central location create a folder then create a subfolder 4. copy some content into the subfolder 5. on the Mac in that central location drag the created top level folder to the Mac desktop, this should work fine, no problem 6. on the PC rename that subfolder 7. on the Mac drag that top level folder to the Mac desktop, this should error on the Mac with the following; a. The operation cannot be completed because you do not have sufficient privileges for b. The operation cannot be completed because you do not have sufficient privileges for 8. on the Mac, if you open that subfolder you can see the file copied in step 4 above but, you can not open that file, you get the following message if you try; a. There was an error opening this document. You do not have permission to open this file. 9. on the PC drag some content into the top level folder 10. on the Mac you can open that file directly from the server or copy it locally, no problem, however-the subfolder is still corrupted or locked, whichever 11. on the PC rename the top level folder 12. on the Mac that same file just opened in step 10 above is now not accessible, get the following message; a. The document could not be opened. I have observed some variances in the above. For instance, a change on the PC side may take a moment before you can observer or act on the Mac side - kind of like the server is slow to respond. Also, the error message may vary. However, the key is once a folder, or subfolder, gets renamed by a PC, Mac problems commence. The solution is to create a new folder from a PC and copy the contents of the corrupted folder to the new folder and not rename the folder name. This has to be done on a PC because the corrupted folder is not accessible by a Mac user. Another problem that dovetails with the above is that we know certain characters are not allowed for PC folder or filenames. If a Mac user creates a folder with a slash in the file name, from the PC the user does not see that slash in the name. As soon as the PC user copies a file to that folder, the Mac user is locked from that folder. Will get the following error message; - Sorry, the operation could not be completed because an unexpected error occurred. - (Error code - 50) In addition to the above mentioned character issue with folders, the problem is more evil with filenames. 
If, for example, you create a file with a slash in the filename on a Mac and copy it to the server you will get the following error message; - You cannot copy some of these items to the destination because their names are too long for the destination. Do you want to skip copying these items and continue copying the other items? Select either Stop or Skip buttons. It does not matter which button is selected. The file name gets copied to the destination location at a reduced size. Depending on the file type, the icon associated with the file may or may not be present. Furthermore, if you open that file on the server you will get the following message; - Couldnt open the file. It may be corrupt or a file format that doesnt recognize. From the users perspective, if they are not observant of the icon or file size, they may disregard the error message and think their file has copied as intended. Only later do they discover the file is corrupt if they open that file. I want to make a note on this problem. It is the PC causing the issue. You can change folder and file names all day on a MAC and you don't have a problem as long as a character is not the issue. Once you change the file name or folder name from a PC the entire folder structure from that level down is corrupted. But it has to be resolved from a PC by creating a new folder and copying the contents to the new folder like stated above. Is something not configured correctly? SUSE Linux Enterprise Server 10 (x86_64) VERSION = 10 PATCHLEVEL = 2 LSB_VERSION="core-2.0-noarch:core-3.0-noarch:core-2.0-x86_64:core-3.0-x86_64" Novell Open Enterprise Server 2.0.1 (x86_64) VERSION = 2.0.1 PATCHLEVEL = 1 BUILD Note: We use Novell clients on all windows systems to connect to the servers for file access and network storage. We use AFP to allow OSx systems to connect to servers.

    Read the article

  • Asp.net Google Charts SSL handler for GeoMap

    - by Ian
    Hi All, I am trying to view Google charts in a site using SSL. Google Charts do not support SSL so if we use the standard charts, we get warning messages. My plan is to create a ASHX handler that is co9ntained in the secure site that will retrieve the content from Google and serve this to the page the user is viewing. Using VS 2008 SP1 and the included web server, my idea works perfectly for both Firefox and IE 8 & 9(Preview) and I am able to see my geomap displayed on my page as it should be. But my problem is when I publish to IIS7 the page using my handler to generate the geomap works in Firefox but not IE(every version). There are no errors anywhere or in any log files, but when i right click in IE in the area where the map should be displayed, I see the message in the context menu saying "movie not loaded" Below is the code from my handler and the aspx page. I have disabled compression in my web.config. Even in IE I am hitting all my break points and when I use the IE9 Developer tools, the web page is correctly generated with all the correct code, url's and references. If you have any better ways to accomplish this or how i can fix my problem, I will appreciate it. Thanks Ian Handler(ASHX) public void ProcessRequest(HttpContext context) { String url = "http://charts.apis.google.com/jsapi"; string query = context.Request.QueryString.ToString(); if (!string.IsNullOrEmpty(query)) { url = query; } HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(new Uri(HttpUtility.UrlDecode(url))); request.UserAgent = context.Request.UserAgent; WebResponse response = request.GetResponse(); string PageContent = string.Empty; StreamReader Reader; Stream webStream = response.GetResponseStream(); string contentType = response.ContentType; context.Response.BufferOutput = true; context.Response.ContentType = contentType; context.Response.Cache.SetCacheability(HttpCacheability.NoCache); context.Response.Cache.SetNoServerCaching(); context.Response.Cache.SetMaxAge(System.TimeSpan.Zero); string newUrl = IanLearning.Properties.Settings.Default.HandlerURL; //"https://localhost:444/googlesecurecharts.ashx?"; if (response.ContentType.Contains("javascript")) { Reader = new StreamReader(webStream); PageContent = Reader.ReadToEnd(); PageContent = PageContent.Replace("http://", newUrl + "http://"); PageContent = PageContent.Replace("charts.apis.google.com", newUrl + "charts.apis.google.com"); PageContent = PageContent.Replace(newUrl + "http://maps.google.com/maps/api/", "http://maps.google.com/maps/api/"); context.Response.Write(PageContent); } else { { byte[] bytes = ReadFully(webStream); context.Response.BinaryWrite(bytes); } } context.Response.Flush(); response.Close(); webStream.Close(); context.Response.End(); context.ApplicationInstance.CompleteRequest(); } ASPX Page <%@ Page Title="" Language="C#" MasterPageFile="~/Site2.Master" AutoEventWireup="true" CodeBehind="googlechart.aspx.cs" Inherits="IanLearning.googlechart" %> <asp:Content ID="Content1" ContentPlaceHolderID="head" runat="server"> <script type='text/javascript' src='~/googlesecurecharts.ashx?'></script> <script type='text/javascript'> google.load('visualization', '1', { 'packages': ['geomap'] }); google.setOnLoadCallback(drawMap); var geomap; function drawMap() { var data = new google.visualization.DataTable(); data.addRows(6); data.addColumn('string', 'City'); data.addColumn('number', 'Sales'); data.setValue(0, 0, 'ZA'); data.setValue(0, 1, 200); data.setValue(1, 0, 'US'); data.setValue(1, 1, 300); data.setValue(2, 0, 'BR'); 
data.setValue(2, 1, 400); data.setValue(3, 0, 'CN'); data.setValue(3, 1, 500); data.setValue(4, 0, 'IN'); data.setValue(4, 1, 600); data.setValue(5, 0, 'ZW'); data.setValue(5, 1, 700); var options = {}; options['region'] = 'world'; options['dataMode'] = 'regions'; options['showZoomOut'] = false; var container = document.getElementById('map_canvas'); geomap = new google.visualization.GeoMap(container); google.visualization.events.addListener( geomap, 'regionClick', function(e) { drillDown(e['region']); }); geomap.draw(data, options); }; function drillDown(regionData) { alert(regionData); } </script> </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="ContentPlaceHolder1" runat="server"> <div id='map_canvas'> </div> </asp:Content>
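    A hedged guess at the IE-only failure (an assumption, not a confirmed diagnosis): "movie not loaded" is the classic symptom of IE refusing to hand SSL responses marked non-cacheable to the Flash plugin, and the Flash-based GeoMap would be hit by exactly that. Relaxing the three cache lines in ProcessRequest so IE may cache the proxied content privately is worth trying:

    // Replaces SetCacheability(NoCache) / SetNoServerCaching() / SetMaxAge(zero):
    // IE needs plugin content over SSL to be cacheable, at least briefly.
    context.Response.Cache.SetCacheability(HttpCacheability.Private);
    context.Response.Cache.SetMaxAge(System.TimeSpan.FromMinutes(5)); // short-lived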

    Read the article

  • Troubleshooting latency spikes on ESXi NFS datastores

    - by exo_cw
    I'm experiencing fsync latencies of around five seconds on NFS datastores in ESXi, triggered by certain VMs. I suspect this might be caused by VMs using NCQ/TCQ, as this does not happen with virtual IDE drives. This can be reproduced using fsync-tester (by Ted Ts'o) and ioping. For example using a Grml live system with a 8GB disk: Linux 2.6.33-grml64: root@dynip211 /mnt/sda # ./fsync-tester fsync time: 5.0391 fsync time: 5.0438 fsync time: 5.0300 fsync time: 0.0231 fsync time: 0.0243 fsync time: 5.0382 fsync time: 5.0400 [... goes on like this ...] That is 5 seconds, not milliseconds. This is even creating IO-latencies on a different VM running on the same host and datastore: root@grml /mnt/sda/ioping-0.5 # ./ioping -i 0.3 -p 20 . 4096 bytes from . (reiserfs /dev/sda): request=1 time=7.2 ms 4096 bytes from . (reiserfs /dev/sda): request=2 time=0.9 ms 4096 bytes from . (reiserfs /dev/sda): request=3 time=0.9 ms 4096 bytes from . (reiserfs /dev/sda): request=4 time=0.9 ms 4096 bytes from . (reiserfs /dev/sda): request=5 time=4809.0 ms 4096 bytes from . (reiserfs /dev/sda): request=6 time=1.0 ms 4096 bytes from . (reiserfs /dev/sda): request=7 time=1.2 ms 4096 bytes from . (reiserfs /dev/sda): request=8 time=1.1 ms 4096 bytes from . (reiserfs /dev/sda): request=9 time=1.3 ms 4096 bytes from . (reiserfs /dev/sda): request=10 time=1.2 ms 4096 bytes from . (reiserfs /dev/sda): request=11 time=1.0 ms 4096 bytes from . (reiserfs /dev/sda): request=12 time=4950.0 ms When I move the first VM to local storage it looks perfectly normal: root@dynip211 /mnt/sda # ./fsync-tester fsync time: 0.0191 fsync time: 0.0201 fsync time: 0.0203 fsync time: 0.0206 fsync time: 0.0192 fsync time: 0.0231 fsync time: 0.0201 [... tried that for one hour: no spike ...] Things I've tried that made no difference: Tested several ESXi Builds: 381591, 348481, 260247 Tested on different hardware, different Intel and AMD boxes Tested with different NFS servers, all show the same behavior: OpenIndiana b147 (ZFS sync always or disabled: no difference) OpenIndiana b148 (ZFS sync always or disabled: no difference) Linux 2.6.32 (sync or async: no difference) It makes no difference if the NFS server is on the same machine (as a virtual storage appliance) or on a different host Guest OS tested, showing problems: Windows 7 64 Bit (using CrystalDiskMark, latency spikes happen mostly during preparing phase) Linux 2.6.32 (fsync-tester + ioping) Linux 2.6.38 (fsync-tester + ioping) I could not reproduce this problem on Linux 2.6.18 VMs. Another workaround is to use virtual IDE disks (vs SCSI/SAS), but that is limiting performance and the number of drives per VM. Update 2011-06-30: The latency spikes seem to happen more often if the application writes in multiple small blocks before fsync. For example fsync-tester does this (strace output): pwrite(3, "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"..., 1048576, 0) = 1048576 fsync(3) = 0 ioping does this while preparing the file: [lots of pwrites] pwrite(3, "********************************"..., 4096, 1036288) = 4096 pwrite(3, "********************************"..., 4096, 1040384) = 4096 pwrite(3, "********************************"..., 4096, 1044480) = 4096 fsync(3) = 0 The setup phase of ioping almost always hangs, while fsync-tester sometimes works fine. Is someone capable of updating fsync-tester to write multiple small blocks? My C skills suck ;) Update 2011-07-02: This problem does not occur with iSCSI. I tried this with the OpenIndiana COMSTAR iSCSI server. 
But iSCSI does not give you easy access to the VMDK files so you can move them between hosts with snapshots and rsync. Update 2011-07-06: This is part of a wireshark capture, captured by a third VM on the same vSwitch. This all happens on the same host, no physical network involved. I've started ioping around time 20. There were no packets sent until the five second delay was over: No. Time Source Destination Protocol Info 1082 16.164096 192.168.250.10 192.168.250.20 NFS V3 WRITE Call (Reply In 1085), FH:0x3eb56466 Offset:0 Len:84 FILE_SYNC 1083 16.164112 192.168.250.10 192.168.250.20 NFS V3 WRITE Call (Reply In 1086), FH:0x3eb56f66 Offset:0 Len:84 FILE_SYNC 1084 16.166060 192.168.250.20 192.168.250.10 TCP nfs > iclcnet-locate [ACK] Seq=445 Ack=1057 Win=32806 Len=0 TSV=432016 TSER=769110 1085 16.167678 192.168.250.20 192.168.250.10 NFS V3 WRITE Reply (Call In 1082) Len:84 FILE_SYNC 1086 16.168280 192.168.250.20 192.168.250.10 NFS V3 WRITE Reply (Call In 1083) Len:84 FILE_SYNC 1087 16.168417 192.168.250.10 192.168.250.20 TCP iclcnet-locate > nfs [ACK] Seq=1057 Ack=773 Win=4163 Len=0 TSV=769110 TSER=432016 1088 23.163028 192.168.250.10 192.168.250.20 NFS V3 GETATTR Call (Reply In 1089), FH:0x0bb04963 1089 23.164541 192.168.250.20 192.168.250.10 NFS V3 GETATTR Reply (Call In 1088) Directory mode:0777 uid:0 gid:0 1090 23.274252 192.168.250.10 192.168.250.20 TCP iclcnet-locate > nfs [ACK] Seq=1185 Ack=889 Win=4163 Len=0 TSV=769821 TSER=432716 1091 24.924188 192.168.250.10 192.168.250.20 RPC Continuation 1092 24.924210 192.168.250.10 192.168.250.20 RPC Continuation 1093 24.924216 192.168.250.10 192.168.250.20 RPC Continuation 1094 24.924225 192.168.250.10 192.168.250.20 RPC Continuation 1095 24.924555 192.168.250.20 192.168.250.10 TCP nfs > iclcnet_svinfo [ACK] Seq=6893 Ack=1118613 Win=32625 Len=0 TSV=432892 TSER=769986 1096 24.924626 192.168.250.10 192.168.250.20 RPC Continuation 1097 24.924635 192.168.250.10 192.168.250.20 RPC Continuation 1098 24.924643 192.168.250.10 192.168.250.20 RPC Continuation 1099 24.924649 192.168.250.10 192.168.250.20 RPC Continuation 1100 24.924653 192.168.250.10 192.168.250.20 RPC Continuation 2nd Update 2011-07-06: There seems to be some influence from TCP window sizes. I was not able to reproduce this problem using FreeNAS (based on FreeBSD) as a NFS server. The wireshark captures showed TCP window updates to 29127 bytes in regular intervals. I did not see them with OpenIndiana, which uses larger window sizes by default. I can no longer reproduce this problem if I set the following options in OpenIndiana and restart the NFS server: ndd -set /dev/tcp tcp_recv_hiwat 8192 # default is 128000 ndd -set /dev/tcp tcp_max_buf 1048575 # default is 1048576 But this kills performance: Writing from /dev/zero to a file with dd_rescue goes from 170MB/s to 80MB/s. Update 2011-07-07: I've uploaded this tcpdump capture (can be analyzed with wireshark). In this case 192.168.250.2 is the NFS server (OpenIndiana b148) and 192.168.250.10 is the ESXi host. Things I've tested during this capture: Started "ioping -w 5 -i 0.2 ." at time 30, 5 second hang in setup, completed at time 40. Started "ioping -w 5 -i 0.2 ." at time 60, 5 second hang in setup, completed at time 70. 
Started "fsync-tester" at time 90, with the following output, stopped at time 120: fsync time: 0.0248 fsync time: 5.0197 fsync time: 5.0287 fsync time: 5.0242 fsync time: 5.0225 fsync time: 0.0209 2nd Update 2011-07-07: Tested another NFS server VM, this time NexentaStor 3.0.5 community edition: Shows the same problems. Update 2011-07-31: I can also reproduce this problem on the new ESXi build 4.1.0.433742.

    Read the article

  • Problem with Variable Scoping in Rebol's Object

    - by Rebol Tutorial
    I have modified the rebodex app so that it can be called from rebol's console any time by typing rebodex. To show the title of the app, I need to store it in app-title: system/script/header/title so tha it could be used later in view/new/title dex reform [self/app-title version] That works but as you can see I have named the var name "app-title", but if I use "title" instead, the window caption would show weird stuff (vid code). Why ? REBOL [ Title: "Rebodex" Date: 23-May-2010 Version: 2.1.1 File: %rebodex.r Author: "Carl Sassenrath" Modification: "Rebtut" Purpose: "A simple but useful address book contact database." Email: %carl--rebol--com library: [ level: 'intermediate platform: none type: 'tool domain: [file-handling DB GUI] tested-under: none support: none license: none see-also: none ] ] rebodex.context: context [ app-title: system/script/header/title version: system/script/header/version set 'rebodex func[][ names-path: %names.r ;data file name-list: none fields: [name company title work cell home car fax web email smail notes updat] names: either exists? names-path [load names-path][ [[name "Carl Sassenrath" title "Founder" company "REBOL Technologies" email "%carl--rebol--com" web "http://www.rebol.com"]] ] brws: [ if not empty? web/text [ if not find web/text "http://" [insert web/text "http://"] error? try [browse web/text] ] ] dial: [request [rejoin ["Dial number for " name/text "? (Not implemented.)"] "Dial" "Cancel"]] dex-styles: stylize [ lab: label 60x20 right bold middle font-size 11 btn: button 64x20 font-size 11 edge [size: 1x1] fld: field 200x20 font-size 11 middle edge [size: 1x1] inf: info font-size 11 middle edge [size: 1x1] ari: field wrap font-size 11 edge [size: 1x1] with [flags: [field tabbed]] ] dex-pane1: layout/offset [ origin 0 space 2x0 across styles dex-styles lab "Name" name: fld bold return lab "Title" title: fld return lab "Company" company: fld return lab "Email" email: fld return lab "Web" brws web: fld return lab "Address" smail: ari 200x72 return lab "Updated" updat: inf 200x20 return ] 0x0 updat/flags: none dex-pane2: layout/offset [ origin 0 space 2x0 across styles dex-styles lab "Work #" dial work: fld 140 return lab "Home #" dial home: fld 140 return lab "Cell #" dial cell: fld 140 return lab "Alt #" dial car: fld 140 return lab "Fax #" fax: fld 140 return lab "Notes" notes: ari 140x72 return pad 136x1 btn "Close" #"^q" [store-entry save-file unview] ] 0x0 dex: layout [ origin 8x8 space 0x1 styles dex-styles srch: fld 196x20 bold across rslt: list 180x150 [ nt: txt 178x15 middle font-size 11 [ store-entry curr: cnt find-name nt/text update-entry unfocus show dex ] ] supply [ cnt: count + scroll-off face/text: "" face/color: snow if not n: pick name-list cnt [exit] face/text: select n 'name face/font/color: black if curr = cnt [face/color: system/view/vid/vid-colors/field-select] ] sl: slider 16x150 [scroll-list] return return btn "New" #"^n" [new-name] btn "Del" #"^d" [delete-name unfocus update-entry search-all show dex] btn "Sort" [sort names sort name-list show rslt] return at srch/offset + (srch/size * 1x0) bx1: box dex-pane1/size bx2: box dex-pane2/size return ] bx1/pane: dex-pane1/pane bx2/pane: dex-pane2/pane rslt/data: [] this-name: first names name-list: copy names curr: none search-text: "" scroll-off: 0 srch/feel: make srch/feel [ redraw: func [face act pos][ face/color: pick face/colors face system/view/focal-face if all [face = system/view/focal-face face/text search-text] [ search-text: copy face/text search-all if 1 = length? 
name-list [this-name: first name-list update-entry show dex] ] ] ] update-file: func [data] [ set [path file] split-path names-path if not exists? path [make-dir/deep path] write names-path data ] save-file: has [buf] [ buf: reform [{REBOL [Title: "Name Database" Date:} now "]^/[^/"] foreach n names [repend buf [mold n newline]] update-file append buf "]" ] delete-name: does [ remove find/only names this-name if empty? names [append-empty] save-file new-name ] clean-names: function [][n][ forall names [ if any [empty? first names none? n: select first names 'name empty? n][ remove names ] ] names: head names ] search-all: function [] [ent flds] [ clean-names clear name-list flds: [name] either empty? search-text [insert name-list names][ foreach nam names [ foreach word flds [ if all [ent: select nam word find ent search-text][ append/only name-list nam break ] ] ] ] scroll-off: 0 sl/data: 0 resize-drag scroll-list curr: none show [rslt sl] ] new-name: does [ store-entry clear-entry search-all append-empty focus name ; update-entry ] append-empty: does [append/only names this-name: copy []] find-name: function [str][] [ foreach nam names [ if str = select nam 'name [ this-name: nam break ] ] ] store-entry: has [val ent flag] [ flag: 0 if not empty? trim name/text [ foreach word fields [ val: trim get in get word 'text either ent: select this-name word [ if ent val [insert clear ent val flag: flag + 1] ][ if not empty? val [repend this-name [word copy val] flag: flag + 1] ] if flag = 1 [flag: 2 updat/text: form now] ] if not zero? flag [save-file] ] ] update-entry: does [ foreach word fields [ insert clear get in get word 'text any [select this-name word ""] ] show rslt ] clear-entry: does [ clear-fields bx1 clear-fields bx2 updat/text: form now unfocus show dex ] show-names: does [ clear rslt/data foreach n name-list [ if n/name [append rslt/data n/name] ] show rslt ] scroll-list: does [ scroll-off: max 0 to-integer 1 + (length? name-list) - (100 / 16) * sl/data show rslt ] do resize-drag: does [sl/redrag 100 / max 1 (16 * length? name-list)] center-face dex new-name focus srch show-names view/new/title dex reform [app-title version] insert-event-func [ either all [event/type = 'close event/face = dex][ store-entry unview ][event] ] do-events ] ]
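    A hedged explanation of the "weird stuff" (sketched from how context and VID bind set-words, not from the official docs): context [...] turns every set-word that appears anywhere inside its block - including title: fld inside the layout specs above - into one shared object word. Name the caption variable title and the layout later overwrites it with the field face, so reform molds VID internals into the window caption; app-title collides with nothing, which is why it works. A tiny reproduction:

    REBOL []
    demo: context [
        title: "Rebodex 2.1.1"      ; intended window caption
        run: does [
            title: "<field face>"   ; stands in for title: fld in the layout
            print reform ["caption is now:" title]
        ]
    ]
    demo/run                        ; prints the clobbered value, not the caption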

    Read the article

  • Matlab image watermarking question, using both SVD and DWT

    - by Georgek
    Hello all . here is a code that i got over the net ,and it is supposed to embed a watermark of size(50*20) called _copyright.bmp in the Code below . the size of the cover object is (512*512), it is called _lena_std_bw.bmp.What we did here is we did DWT2 2 times for the image , when we reached our second dwt2 cA2 size is 128*128. You should notice that the blocksize and it equals 4, it is used to determine the max msg size based on cA2 according to the following code:max_message=RcA2*CcA2/(blocksize^2). in our current case max_message would equal 128*128/(4^2)=1024. i want to embed a bigger watermark in the 2nd dwt2 and lets say the size of that watermark is 400*10(i can change the dimension using MS PAINT), what i have to do is change the size of the blocksize to 2. so max_message=4096.Matlab gives me 3 errors and they are : ??? Error using == plus Matrix dimensions must agree. Error in == idwt2 at 93 x = upsconv2(a,{Lo_R,Lo_R},sx,dwtEXTM,shift)+ ... % Approximation. Error in == two_dwt_svd_low_low at 88 CAA1 = idwt2(cA22,cH2,cV2,cD2,'haar',[RcA1,CcA1]); The origional Code is (the origional code where blocksize =4): %This algorithm makes DWT for the whole image and after that make DWT for %cH1 and make SVD for cH2 and embed the watermark in every level after SVD %(1) -------------- Embed Watermark ------------------------------------ %Add the watermar W to original image I and give the watermarked image in J %-------------------------------------------------------------------------- % set the gain factor for embeding and threshold for evaluation clc; clear all; close all; % save start time start_time=cputime; % set the value of threshold and alpha thresh=.5; alpha =0.01; % read in the cover object file_name='_lena_std_bw.bmp'; cover_object=double(imread(file_name)); % determine size of watermarked image Mc=size(cover_object,1); %Height Nc=size(cover_object,2); %Width % read in the message image and reshape it into a vector file_name='_copyright.bmp'; message=double(imread(file_name)); T=message; Mm=size(message,1); %Height Nm=size(message,2); %Width % perform 1-level DWT for the whole cover image [cA1,cH1,cV1,cD1] = dwt2(cover_object,'haar'); % determine the size of cA1 [RcA1 CcA1]=size(cA1) % perform 2-level DWT for cA1 [cA2,cH2,cV2,cD2] = dwt2(cA1,'haar'); % determine the size of cA2 [RcA2 CcA2]=size(cA2) % set the value of blocksize blocksize=4 % reshape the watermark to a vector message_vector=round(reshape(message,Mm*Nm,1)./256); W=message_vector; % determine maximum message size based on cA2, and blocksize max_message=RcA2*CcA2/(blocksize^2) % check that the message isn't too large for cover if (length(message) max_message) error('Message too large to fit in Cover Object') end %----------------------- process the image in blocks ---------------------- x=1; y=1; for (kk = 1:length(message_vector)) [cA2u cA2s cA2v]=svd(cA2(y:y+blocksize-1,x:x+blocksize-1)); % if message bit contains zero, modify S of the original image if (message_vector(kk) == 0) cA2s = cA2s*(1 + alpha); % otherwise mask is filled with zeros else cA2s=cA2s; end cA22(y:y+blocksize-1,x:x+blocksize-1)=cA2u*cA2s*cA2v; % move to next block of mask along x; If at end of row, move to next row if (x+blocksize) >= CcA2 x=1; y=y+blocksize; else x=x+blocksize; end end % perform IDWT CAA1 = idwt2(cA22,cH2,cV2,cD2,'haar',[RcA1,CcA1]); watermarked_image= idwt2(CAA1,cH1,cV1,cD1,'haar',[Mc,Nc]); % convert back to uint8 watermarked_image_uint8=uint8(watermarked_image); % write watermarked Image to file 
imwrite(watermarked_image_uint8,'dwt_watermarked.bmp','bmp'); % display watermarked image figure(1) imshow(watermarked_image_uint8,[]) title('Watermarked Image') %(2) ---------------------------------------------------------------------- %---------- Extract Watermark from attacked watermarked image ------------- %-------------------------------------------------------------------------- % read in the watermarked object file_name='dwt_watermarked.bmp'; watermarked_image=double(imread(file_name)); % determine size of watermarked image Mw=size(watermarked_image,1); %Height Nw=size(watermarked_image,2); %Width % perform 1-level DWT for the whole watermarked image [ca1,ch1,cv1,cd1] = dwt2(watermarked_image,'haar'); % determine the size of ca1 [Rca1 Cca1]=size(ca1); % perform 2-level DWT for ca1 [ca2,ch2,cv2,cd2] = dwt2(ca1,'haar'); % determine the size of ca2 [Rca2 Cca2]=size(ca2); % process the image in blocks % for each block get a bit for message x=1; y=1; for (kk = 1:length(message_vector)) % sets correlation to 1 when patterns are identical to avoid /0 errors % otherwise calcluate difference between the cover image and the % watermarked image [cA2u cA2s cA2v]=svd(cA2(y:y+blocksize-1,x:x+blocksize-1)); [ca2u1 ca2s1 ca2v1]=svd(ca2(y:y+blocksize-1,x:x+blocksize-1)); correlation(kk)=diag(ca2s1-cA2s)'*diag(ca2s1-cA2s)/(alpha*alpha)/(diag(cA2s)*diag(cA2s)); % move on to next block. At and of row move to next row if (x+blocksize) >= Cca2 x=1; y=y+blocksize; else x=x+blocksize; end end % if correlation exceeds average correlation correlation(kk)=correlation(kk)+mean(correlation(1:Mm*Nm)); for kk = 1:length(correlation) if (correlation(kk) > thresh*alpha);%thresh*mean(correlation(1:Mo*No))) message_vector(kk)=0; end end % reshape the message vector and display recovered watermark. figure(2) message=reshape(message_vector(1:Mm*Nm),Mm,Nm); imshow(message,[]) title('Recovered Watermark') % display processing time elapsed_time=cputime-start_time, please do help,its my graduation project and i have been trying this code for along time but failed miserable. Thanks in advance
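    One concrete suspicion about those three errors (from reading the code, not running it): cA22 is created implicitly by the block assignments, so when the message fills fewer blocks than max_message - exactly what happens with a 400*10 mark and blocksize 2, i.e. 4000 of 4096 blocks - the last block rows are never written and cA22 comes out smaller than the 128*128 of cH2, which is what idwt2 then rejects with "Matrix dimensions must agree". Pre-allocating it before the embedding loop keeps the dimensions consistent and leaves untouched blocks intact:

    % Place this just before the "process the image in blocks" loop:
    cA22 = cA2;   % full-size copy; blocks the message never touches keep their coefficients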

    Read the article

  • rotating bitmaps. In code.

    - by Marco van de Voort
    Is there a faster way to rotate a large bitmap by 90 or 270 degrees than simply doing a nested loop with inverted coordinates? The bitmaps are 8bpp and typically 2048*2400*8bpp Currently I do this by simply copying with argument inversion, roughly (pseudo code: for x = 0 to 2048-1 for y = 0 to 2048-1 dest[x][y]=src[y][x]; (In reality I do it with pointers, for a bit more speed, but that is roughly the same magnitude) GDI is quite slow with large images, and GPU load/store times for textures (GF7 cards) are in the same magnitude as the current CPU time. Any tips, pointers? An in-place algorithm would even be better, but speed is more important than being in-place. Target is Delphi, but it is more an algorithmic question. SSE(2) vectorization no problem, it is a big enough problem for me to code it in assembler Duplicates How do you rotate a two dimensional array?. Follow up to Nils' answer Image 2048x2700 - 2700x2048 Compiler Turbo Explorer 2006 with optimization on. Windows: Power scheme set to "Always on". (important!!!!) Machine: Core2 6600 (2.4 GHz) time with old routine: 32ms (step 1) time with stepsize 8 : 12ms time with stepsize 16 : 10ms time with stepsize 32+ : 9ms Meanwhile I also tested on a Athlon 64 X2 (5200+ iirc), and the speed up there was slightly more than a factor four (80 to 19 ms). The speed up is well worth it, thanks. Maybe that during the summer months I'll torture myself with a SSE(2) version. However I already thought about how to tackle that, and I think I'll run out of SSE2 registers for an straight implementation: for n:=0 to 7 do begin load r0, <source+n*rowsize> shift byte from r0 into r1 shift byte from r0 into r2 .. shift byte from r0 into r8 end; store r1, <target> store r2, <target+1*<rowsize> .. store r8, <target+7*<rowsize> So 8x8 needs 9 registers, but 32-bits SSE only has 8. Anyway that is something for the summer months :-) Note that the pointer thing is something that I do out of instinct, but it could be there is actually something to it, if your dimensions are not hardcoded, the compiler can't turn the mul into a shift. While muls an sich are cheap nowadays, they also generate more register pressure afaik. The code (validated by subtracting result from the "naieve" rotate1 implementation): const stepsize = 32; procedure rotatealign(Source: tbw8image; Target:tbw8image); var stepsx,stepsy,restx,resty : Integer; RowPitchSource, RowPitchTarget : Integer; pSource, pTarget,ps1,ps2 : pchar; x,y,i,j: integer; rpstep : integer; begin RowPitchSource := source.RowPitch; // bytes to jump to next line. Can be negative (includes alignment) RowPitchTarget := target.RowPitch; rpstep:=RowPitchTarget*stepsize; stepsx:=source.ImageWidth div stepsize; stepsy:=source.ImageHeight div stepsize; // check if mod 16=0 here for both dimensions, if so -> SSE2. for y := 0 to stepsy - 1 do begin psource:=source.GetImagePointer(0,y*stepsize); // gets pointer to pixel x,y ptarget:=Target.GetImagePointer(target.imagewidth-(y+1)*stepsize,0); for x := 0 to stepsx - 1 do begin for i := 0 to stepsize - 1 do begin ps1:=@psource[rowpitchsource*i]; // ( 0,i) ps2:=@ptarget[stepsize-1-i]; // (maxx-i,0); for j := 0 to stepsize - 1 do begin ps2[0]:=ps1[j]; inc(ps2,RowPitchTarget); end; end; inc(psource,stepsize); inc(ptarget,rpstep); end; end; // 3 more areas to do, with dimensions // - stepsy*stepsize * restx // right most column of restx width // - stepsx*stepsize * resty // bottom row with resty height // - restx*resty // bottom-right rectangle. 
restx:=source.ImageWidth mod stepsize; // typically zero because width is // typically 1024 or 2048 resty:=source.Imageheight mod stepsize; if restx>0 then begin // one loop less, since we know this fits in one line of "blocks" psource:=source.GetImagePointer(source.ImageWidth-restx,0); // gets pointer to pixel x,y ptarget:=Target.GetImagePointer(Target.imagewidth-stepsize,Target.imageheight-restx); for y := 0 to stepsy - 1 do begin for i := 0 to stepsize - 1 do begin ps1:=@psource[rowpitchsource*i]; // ( 0,i) ps2:=@ptarget[stepsize-1-i]; // (maxx-i,0); for j := 0 to restx - 1 do begin ps2[0]:=ps1[j]; inc(ps2,RowPitchTarget); end; end; inc(psource,stepsize*RowPitchSource); dec(ptarget,stepsize); end; end; if resty>0 then begin // one loop less, since we know this fits in one line of "blocks" psource:=source.GetImagePointer(0,source.ImageHeight-resty); // gets pointer to pixel x,y ptarget:=Target.GetImagePointer(0,0); for x := 0 to stepsx - 1 do begin for i := 0 to resty- 1 do begin ps1:=@psource[rowpitchsource*i]; // ( 0,i) ps2:=@ptarget[resty-1-i]; // (maxx-i,0); for j := 0 to stepsize - 1 do begin ps2[0]:=ps1[j]; inc(ps2,RowPitchTarget); end; end; inc(psource,stepsize); inc(ptarget,rpstep); end; end; if (resty>0) and (restx>0) then begin // another loop less, since only one block psource:=source.GetImagePointer(source.ImageWidth-restx,source.ImageHeight-resty); // gets pointer to pixel x,y ptarget:=Target.GetImagePointer(0,target.ImageHeight-restx); for i := 0 to resty- 1 do begin ps1:=@psource[rowpitchsource*i]; // ( 0,i) ps2:=@ptarget[resty-1-i]; // (maxx-i,0); for j := 0 to restx - 1 do begin ps2[0]:=ps1[j]; inc(ps2,RowPitchTarget); end; end; end; end;

    Read the article

  • Accessing py2exe program over network in Windows 98 throws ImportErrors

    - by darvids0n
    I'm running a py2exe-compiled python program from one server machine on a number of client machines (mapped to a network drive on every machine, say W:). For Windows XP and later machines, have so far had zero problems with Python picking up W:\python23.dll (yes, I'm using Python 2.3.5 for W98 compatibility and all that). It will then use W:\zlib.pyd to decompress W:\library.zip containing all the .pyc files like os and such, which are then imported and the program runs no problems. The issue I'm getting is on some Windows 98 SE machines (note: SOME Windows 98 SE machines, others seem to work with no apparent issues). What happens is, the program runs from W:, the W:\python23.dll is, I assume, found (since I'm getting Python ImportErrors, we'd need to be able to execute a Python import statement), but a couple of things don't work: 1) If W:\library.zip contains the only copy of the .pyc files, I get ZipImportError: can't decompress data; zlib not available (nonsense, considering W:\zlib.pyd IS available and works fine with the XP and higher machines on the same network). 2) If the .pyc files are actually bundled INSIDE the python exe by py2exe, OR put in the same directory as the .exe, OR put into a named subdirectory which is then set as part of the PYTHONPATH variable (e.g W:\pylib), I get ImportError: no module named os (os is the first module imported, before sys and anything else). Come to think of it, sys.path wouldn't be available to search if os was imported before it maybe? I'll try switching the order of those imports but my question still stands: Why is this a sporadic issue, working on some networks but not on others? And how would I force Python to find the files that are bundled inside the very executable I run? I have immediate access to the working Windows 98 SE machine, but I only get access to the non-working one (a customer of mine) every morning before their store opens. Thanks in advance! EDIT: Okay, big step forward. After debugging with PY2EXE_VERBOSE, the problem occurring on the specific W98SE machine is that it's not using the right path syntax when looking for imports. Firstly, it doesn't seem to read the PYTHONPATH environment variable (there may be a py2exe-specific one I'm not aware of, like PY2EXE_VERBOSE). Secondly, it only looks in one place before giving up (if the files are bundled inside the EXE, it looks there. If not, it looks in library.zip). EDIT 2: In fact, according to this, there is a difference between the sys.path in the Python interpreter and that of Py2exe executables. Specifically, sys.path contains only a single entry: the full pathname of the shared code archive. Blah. No fallbacks? Not even the current working directory? I'd try adding W:\ to PATH, but py2exe doesn't conform to any sort of standards for locating system libraries, so it won't work. Now for the interesting bit. The path it tries to load atexit, os, etc. from is: W:\\library.zip\<module>.<ext> Note the single slash after library.zip, but the double slash after the drive letter (someone correct me if this is intended and should work). It looks like if this is a string literal, then since the slash isn't doubled, it's read as an (invalid) escape sequence and the raw character is printed (giving W:\library.zipos.pyd, W:\library.zipos.dll, ... instead of with a slash); if it is NOT a string literal, the double slash might not be normpath'd automatically (as it should be) and so the double slash confuses the module loader. 
Like I said, I can't just set PYTHONPATH=W:\\library.zip\\ because it ignores that variable. It may be worth using sys.path.append at the start of my program but hard-coding module paths is an absolute LAST resort, especially since the problem occurs in ONE configuration of an outdated OS. Any ideas? I have one, which is to normpath the sys.path.. pity I need os for that. Another is to just append os.getenv('PATH') or os.getenv('PYTHONPATH') to sys.path... again, needing the os module. The site module also fails to initialise, so I can't use a .pth file. I also recently tried the following code at the start of the program: for pth in sys.path: fErr.write(pth) fErr.write(' to ') pth.replace('\\\\','\\') # Fix Windows 98 pathing issues fErr.write(pth) fErr.write('\n') But it can't load linecache.pyc, or anything else for that matter; it can't actually execute those commands from the looks of things. Is there any way to use built-in functionality which doesn't need linecache to modify the sys.path dynamically? Or am I reduced to hard-coding the correct sys.path?
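    For what it's worth (untested on 98SE, and written for Python 2.3): str.replace returns a new string, so the loop above logs the fixed path but never stores it back into sys.path. Rebinding the list needs only builtins - no os, no linecache:

    # Run this at the very top of the entry script, before any non-builtin
    # import; sys is a builtin module, so no path lookup is needed for it.
    import sys
    sys.path = [p.replace('\\\\', '\\') for p in sys.path]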


  • problems trying to get ieee 802.15.4 working from msk

    - by asel
    Hi, I took MSK code from dsplog.com and tried to modify it to test IEEE 802.15.4 (there are several links on that site for IEEE 802.15.4). Currently the simulated BER results come out approximately the same for every Eb/N0 value. Can you help me find out why? Thanks in advance!

        clear
        PN = [ 1 1 0 1 1 0 0 1 1 1 0 0 0 0 1 1 0 1 0 1 0 0 1 0 0 0 1 0 1 1 1 0;
               1 1 1 0 1 1 0 1 1 0 0 1 1 1 0 0 0 0 1 1 0 1 0 1 0 0 1 0 0 0 1 0;
               0 0 1 0 1 1 1 0 1 1 0 1 1 0 0 1 1 1 0 0 0 0 1 1 0 1 0 1 0 0 1 0;
               0 0 1 0 0 0 1 0 1 1 1 0 1 1 0 1 1 0 0 1 1 1 0 0 0 0 1 1 0 1 0 1;
               0 1 0 1 0 0 1 0 0 0 1 0 1 1 1 0 1 1 0 1 1 0 0 1 1 1 0 0 0 0 1 1;
               0 0 1 1 0 1 0 1 0 0 1 0 0 0 1 0 1 1 1 0 1 1 0 1 1 0 0 1 1 1 0 0;
               1 1 0 0 0 0 1 1 0 1 0 1 0 0 1 0 0 0 1 0 1 1 1 0 1 1 0 1 1 0 0 1;
               1 0 0 1 1 1 0 0 0 0 1 1 0 1 0 1 0 0 1 0 0 0 1 0 1 1 1 0 1 1 0 1;
               1 0 0 0 1 1 0 0 1 0 0 1 0 1 1 0 0 0 0 0 0 1 1 1 0 1 1 1 1 0 1 1;
               1 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 0 1 1 0 0 0 0 0 0 1 1 1 0 1 1 1;
               0 1 1 1 1 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 0 1 1 0 0 0 0 0 0 1 1 1;
               0 1 1 1 0 1 1 1 1 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 0 1 1 0 0 0 0 0;
               0 0 0 0 0 1 1 1 0 1 1 1 1 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 0 1 1 0;
               0 1 1 0 0 0 0 0 0 1 1 1 0 1 1 1 1 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1;
               1 0 0 1 0 1 1 0 0 0 0 0 0 1 1 1 0 1 1 1 1 0 1 1 1 0 0 0 1 1 0 0;
               1 1 0 0 1 0 0 1 0 1 1 0 0 0 0 0 0 1 1 1 0 1 1 1 1 0 1 1 1 0 0 0 ];
        N = 5*10^5;        % number of bits or symbols
        fsHz = 1;          % sampling period
        T = 4;             % symbol duration
        Eb_N0_dB = [0:10]; % multiple Eb/N0 values
        ct = cos(pi*[-T:N*T-1]/(2*T));
        st = sin(pi*[-T:N*T-1]/(2*T));
        for ii = 1:length(Eb_N0_dB)
            tx = [];
            % MSK transmitter
            ipBit = round(rand(1,N/32)*15);
            for k = 1:length(ipBit)
                sym = ipBit(k);
                tx = [tx PN((sym+1),1:end)];
            end
            ipMod = 2*tx - 1;                      % BPSK modulation 0 -> -1, 1 -> 1
            ai = kron(ipMod(1:2:end),ones(1,2*T)); % even bits
            aq = kron(ipMod(2:2:end),ones(1,2*T)); % odd bits
            ai = [ai zeros(1,T)];                  % padding with zeros to make the dimensions match
            aq = [zeros(1,T) aq];                  % adding delay of T for Q-arm
            % MSK transmit waveform
            xt = 1/sqrt(T)*[ai.*ct + j*aq.*st];
            % Additive white Gaussian noise, 0 dB variance
            nt = 1/sqrt(2)*[randn(1,N*T+T) + j*randn(1,N*T+T)];
            % Noise addition
            yt = xt + 10^(-Eb_N0_dB(ii)/20)*nt;
            % MSK receiver: multiplying with cosine and sine waveforms
            xE = conv(real(yt).*ct,ones(1,2*T));
            xO = conv(imag(yt).*st,ones(1,2*T));
            bHat = zeros(1,N);
            bHat(1:2:end) = xE(2*T+1:2*T:end-2*T); % even bits
            bHat(2:2:end) = xO(3*T+1:2*T:end-T);   % odd bits
            result = zeros(16,1);
            chiplen = 32;
            seqstart = 1;
            recovered = [];
            while (seqstart < length(bHat))
                A = bHat(seqstart:seqstart+(chiplen-1));
                for j = 1:16
                    B = PN(j,1:end);
                    result(j) = sum(A.*B);
                end
                [value,index] = max(result);
                recovered = [recovered (index-1)];
                seqstart = seqstart + chiplen;
            end;
            %# create binary string - the 4 forces at least 4 bits
            bstr1 = dec2bin(ipBit,4);
            bstr2 = dec2bin(recovered,4);
            %# convert back to numbers (reshape so that zeros are preserved)
            out1 = str2num(reshape(bstr1',[],1))';
            out2 = str2num(reshape(bstr2',[],1))';
            % counting the errors
            nErr(ii) = size(find([out1 - out2]),2);
        end
        nErr/(length(ipBit)*4)                        % simulated BER
        theoryBer = 0.5*erfc(sqrt(10.^(Eb_N0_dB/10))) % theoretical BER
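    One thing worth checking, hedged since I can't run the poster's exact setup: the despreading loop correlates the soft (bipolar) samples in A against the raw 0/1 rows of PN, so chips where the PN row is 0 contribute nothing to sum(A.*B) and half the distance between codewords is thrown away. The conventional correlator maps the PN rows to +/-1 first. A small numpy sketch of just that despreading step (despread, bhat, and pn are names of my own; this is an illustration, not the dsplog code):

        import numpy as np

        def despread(bhat, pn):
            """Correlate soft bipolar chip estimates against bipolar PN rows.

            bhat: 1-D array of soft chip estimates, length a multiple of 32
            pn:   16x32 array of 0/1 spreading sequences (as in the post)
            returns: recovered 4-bit symbol index per 32-chip block
            """
            pn_bipolar = 2.0 * np.asarray(pn, dtype=float) - 1.0   # 0/1 -> -1/+1
            chips = np.asarray(bhat, dtype=float).reshape(-1, 32)  # one row per symbol
            corr = chips @ pn_bipolar.T                            # 16 correlations per symbol
            return np.argmax(corr, axis=1)                         # index of best match

    If the flat BER persists with bipolar correlation, the next suspects would be the noise scaling (Eb versus chip energy, given 32 chips per 4 bits) and the sampling indices into xE/xO, but those are guesses.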


  • Repeated Squaring - Matrix Multiplication using NEWMAT

    - by Dinakar Kulkarni
    I'm trying to use the repeated squaring algorithm (using recursion) to perform matrix exponentiation. I've included header files from the NEWMAT library instead of using arrays. The original matrix has elements in the range (-5,5), all numbers being of type float.

        # include "C:\User\newmat10\newmat.h"
        # include "C:\User\newmat10\newmatio.h"
        # include "C:\User\newmat10\newmatap.h"
        # include <iostream>
        # include <time.h>
        # include <ctime>
        # include <cstdlib>
        # include <iomanip>
        using namespace std;

        Matrix repeated_squaring(Matrix A, int exponent, int n) // Recursive function
        {
            A(n,n);
            IdentityMatrix I(n);
            if (exponent == 0) // Matrix raised to zero returns an Identity Matrix
                return I;
            else
            {
                if (exponent%2 == 1) // if exponent is odd
                    return (A * repeated_squaring(A*A, (exponent-1)/2, n));
                else // if exponent is even
                    return (A * repeated_squaring(A*A, exponent/2, n));
            }
        }

        Matrix direct_squaring(Matrix B, int k, int no) // Brute force multiplication
        {
            B(no,no);
            Matrix C = B;
            for (int i = 1; i <= k; i++)
                C = B*C;
            return C;
        }

        //---- Creating a matrix with elements b/w (-5,5) ----
        float unifRandom()
        {
            int a = -5;
            int b = 5;
            float temp = (float)((b-a)*(rand()/RAND_MAX) + a);
            return temp;
        }

        Matrix initialize_mat(Matrix H, int ord)
        {
            H(ord,ord);
            for (int y = 1; y <= ord; y++)
                for (int z = 1; z <= ord; z++)
                    H(y,z) = unifRandom();
            return (H);
        }
        //----------------------------------------------------

        void main()
        {
            int exponent, dimension;
            cout << "Insert exponent:" << endl;
            cin >> exponent;
            cout << "Insert dimension:" << endl;
            cin >> dimension;
            cout << "The number of rows/columns in the square matrix is: " << dimension << endl;
            cout << "The exponent is: " << exponent << endl;
            Matrix A(dimension,dimension), B(dimension,dimension);
            Matrix C(dimension,dimension), D(dimension,dimension);
            B = initialize_mat(A,dimension);
            cout << "Initial Matrix: " << endl;
            cout << setw(5) << setprecision(2) << B << endl;
            //------------------------------------------------
            cout << "Repeated Squaring Result: " << endl;
            clock_t time_before1 = clock();
            C = repeated_squaring(B, exponent, dimension);
            cout << setw(5) << setprecision(2) << C;
            clock_t time_after1 = clock();
            float diff1 = ((float)time_after1 - (float)time_before1);
            cout << "It took " << diff1/CLOCKS_PER_SEC << " seconds to complete" << endl << endl;
            //------------------------------------------------
            cout << "Direct Squaring Result:" << endl;
            clock_t time_before2 = clock();
            D = direct_squaring(B, exponent, dimension);
            cout << setw(5) << setprecision(2) << D;
            clock_t time_after2 = clock();
            float diff2 = ((float)time_after2 - (float)time_before2);
            cout << "It took " << diff2/CLOCKS_PER_SEC << " seconds to complete" << endl << endl;
        }

    I face the following problems: the random number generator returns only "-5" for every element in the output, and the exponentiation yields different results from brute-force multiplication and from the repeated squaring algorithm. I'm timing the execution of my code to compare the times taken by brute-force multiplication and by repeated squaring. Could someone please find out what's wrong with the recursion and with the matrix initialization? NOTE: While compiling this program, make sure you've linked the NEWMAT library. Thanks in advance!
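    Two things jump out, offered as a sketch rather than a tested NEWMAT patch. First, in unifRandom, rand()/RAND_MAX divides int by int, so the quotient is 0 for every draw except rand() == RAND_MAX, and every element becomes (b-a)*0 + a = -5; casting first, (float)rand()/RAND_MAX, fixes it. Second, the even branch of the recursion multiplies by an extra A: for an even exponent the result should be repeated_squaring(A*A, exponent/2, n) with no leading A (as written, an exponent of 4 yields A^5). Relatedly, direct_squaring starts from C = B and multiplies k times, so it computes B^(k+1), which is why the two results can never agree. The same algorithm in Python/numpy, just to show the corrected recursion shape (the names are mine):

        import numpy as np

        def repeated_squaring(a, exponent):
            """Matrix exponentiation by repeated squaring (a assumed square)."""
            if exponent == 0:
                return np.eye(a.shape[0])          # A^0 is the identity
            if exponent % 2 == 1:                  # odd: peel off one factor of A
                return a @ repeated_squaring(a @ a, (exponent - 1) // 2)
            return repeated_squaring(a @ a, exponent // 2)  # even: NO leading A

        # Quick self-check against numpy's own matrix power:
        a = np.random.uniform(-5.0, 5.0, (4, 4))   # float draws, unlike int rand()/RAND_MAX
        assert np.allclose(repeated_squaring(a, 7), np.linalg.matrix_power(a, 7))

    The analogous C++ fix is just deleting the leading "A *" in the even branch and adding the cast in unifRandom.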


  • linux thread synchronization

    - by johnnycrash
    I am new to Linux and Linux threads. I have spent some time googling to try to understand the differences between all the functions available for thread synchronization, and I still have some questions. I have found all of these different types of synchronization, each with a number of functions for locking, unlocking, testing the lock, etc.: gcc atomic operations, futexes, mutexes, spinlocks, seqlocks, rculocks, conditions, semaphores.

    My current (but probably flawed) understanding is this: semaphores are process-wide, involve the filesystem (virtually, I assume), and are probably the slowest. Futexes might be the base locking mechanism used by mutexes, spinlocks, seqlocks, and rculocks, and might be faster than the locking mechanisms that are based on them. Spinlocks don't block and thus avoid context switches, but at the expense of consuming all the cycles on a CPU until the lock is released (spinning); they should only be used on multiprocessor systems, for obvious reasons, and you should never sleep while holding one. A seqlock just tells you, when you have finished your work, whether a writer changed the data the work was based on; in that case you have to go back and repeat the work. Atomic operations are the fastest synchronization calls and are probably used inside all the above locking mechanisms. You do not want to use atomic operations on every field in your shared data, though; you want a lock (mutex, futex, spin, seq, rcu) or a single atomic operation on a lock flag when you are accessing multiple data fields.

    My questions go like this:

    1. Am I right so far with my assumptions? Does anyone know the CPU-cycle cost of the various options? I am adding parallelism to the app so we can get better wall-time response at the expense of running fewer app instances per box. Performance is the utmost consideration: I don't want to consume CPU with context switching, spinning, or lots of extra cycles to read and write shared memory. I am absolutely concerned with the number of CPU cycles consumed.

    2. Which (if any) of the locks prevent interruption of a thread by the scheduler or interrupts... or am I just an idiot and all synchronization mechanisms do this? What kinds of interruption are prevented? Can I block all threads, or just threads on the locking thread's CPU? This question stems from my fear of interrupting a thread that holds a lock around a very commonly used function. I expect the scheduler might schedule any number of other workers who will likely run into this function and then block because it is locked; a lot of context switching would be wasted until the thread with the lock gets rescheduled and finishes. I can rewrite this function to minimize lock time, but it is called so often that I would like a lock that prevents interruption... across all processors.

    3. I am writing user code, so I get software interrupts, not hardware ones... right? Should I stay away from any functions (spin/seq locks) that have the word "irq" in them? Which locks are for writing kernel or driver code, and which are meant for user mode?

    4. Does anyone think using an atomic operation to have multiple threads move through a linked list is nuts? I am thinking of atomically changing the current-item pointer to the next item in the list. If the attempt works, the thread can safely use the data the current item pointed to before it was moved; other threads would then be moved along the list.

    5. Futexes? Any reason to use them instead of mutexes?
    6. Is there a better way than using a condition to sleep a thread when there is no work?

    7. When using gcc atomic ops, specifically test_and_set, can I get a performance increase by doing a non-atomic test first and then using test_and_set to confirm? I know this will be case-specific, so here is the case. There is a large collection of work items, say thousands. Each work item has a flag that is initialized to 0; when a thread has exclusive access to the work item, the flag will be one. There will be lots of worker threads. Any time a thread is looking for work, it can non-atomically test the flag for 1: if it reads a 1, we know for certain that the work is unavailable; if it reads a zero, it needs to perform the atomic test_and_set to confirm. So if the atomic test_and_set costs 500 CPU cycles (because it disables pipelining, makes CPUs communicate, and flushes/fills L2 caches) and a simple test is 1 cycle, then as long as I beat a 500-to-1 ratio when stumbling upon already-completed work items, this would be a win. (A sketch of this pattern follows below.)

    I hope to use mutexes or spinlocks sparingly to protect sections of code that I want only one thread on the SYSTEM (not just the CPU) to access at a time, and to use gcc atomic ops sparingly to select work and minimize the use of mutexes and spinlocks. For instance: a flag in a work item can be checked to see if a thread has worked it (0 = no, 1 = yes or in progress); a simple test_and_set tells the thread whether it has the work or needs to move on. I hope to use conditions to wake up threads when there is work. Thanks!
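    On question 7, the pattern described is usually called test-and-test-and-set (TTAS), and it is a real, commonly used optimization: the cheap racy read filters out items that are certainly taken, so the expensive atomic operation only runs on items that at least looked free. A structural sketch in Python, with threading.Lock standing in for the atomic test_and_set and WorkItem/claim as names of my own invention (in CPython the GIL serializes the plain reads anyway, so this illustrates the shape of the pattern, not the cycle counts):

        import threading

        class WorkItem:
            def __init__(self, payload):
                self.payload = payload
                self.taken = False             # plain flag: cheap, racy read
                self._lock = threading.Lock()  # stands in for the atomic test_and_set

            def claim(self):
                """Try to take exclusive ownership of this item (TTAS shape)."""
                if self.taken:                 # non-atomic pre-test: a 1 is a certain 'no'
                    return False
                if not self._lock.acquire(blocking=False):  # atomic confirm
                    return False
                if self.taken:                 # lost the race after the pre-test
                    self._lock.release()
                    return False
                self.taken = True              # we own it now
                self._lock.release()
                return True

    A worker scanning thousands of items calls claim() on each candidate and only pays for the atomic step on items whose flag still read 0.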


  • File Access problems with SLES 10 SP2 OES2 SP1

    - by Blackhawk131
    We have identified a couple of repeatable, demonstrable scenarios of unexplained rejected folder access on our servers for Mac users. Hopefully, this can be presented to Novell for a solution.

    What we did to demonstrate scenario 1:

    1. Set up a PC and a Mac side by side.
    2. Log in to our server and open a central location on both the Mac and the PC.
    3. On the PC, in that central location, create a folder.
    4. On the Mac, in that central location, drag the created folder to the Mac desktop; this works fine, no problem.
    5. On the PC, rename that folder.
    6. On the Mac, drag a file to that renamed folder. This errors with the message: "You cannot copy some of these items to the destination because their names are too long for the destination. Do you want to skip copying these items and continue copying the other items?" Select skip, and the filename is copied to the location with a zero or small byte size; try opening it and you get a "file is corrupted" error message.

    What we did to demonstrate scenario 2:

    1. Set up a PC and a Mac side by side.
    2. Log in to our server and open a central location on both the Mac and the PC.
    3. On the PC, in that central location, create a folder, then create a subfolder.
    4. Copy some content into the subfolder.
    5. On the Mac, in that central location, drag the created top-level folder to the Mac desktop; this works fine, no problem.
    6. On the PC, rename that subfolder.
    7. On the Mac, drag that top-level folder to the Mac desktop. This errors on the Mac with: "The operation cannot be completed because you do not have sufficient privileges for..."
    8. On the Mac, if you open that subfolder you can see the file copied in step 4 above, but you cannot open that file; if you try, you get: "There was an error opening this document. You do not have permission to open this file."
    9. On the PC, drag some content into the top-level folder.
    10. On the Mac, you can open that file directly from the server or copy it locally, no problem; however, the subfolder is still corrupted, or locked, whichever it is.
    11. On the PC, rename the top-level folder.
    12. On the Mac, that same file just opened in step 10 above is now not accessible; you get: "The document could not be opened."

    I have observed some variance in the above. For instance, a change on the PC side may take a moment before you can observe or act on it from the Mac side, kind of like the server is slow to respond, and the exact error message may vary. The key, however, is that once a folder or subfolder gets renamed by a PC, the Mac problems commence. The workaround is to create a new folder from a PC, copy the contents of the corrupted folder into it, and not rename the folder. This has to be done on a PC because the corrupted folder is not accessible by a Mac user.

    Another problem that dovetails with the above is that we know certain characters are not allowed in PC folder or file names. If a Mac user creates a folder with a slash in its name, a PC user does not see that slash in the name. As soon as the PC user copies a file to that folder, the Mac user is locked out of it and gets the following error message: "Sorry, the operation could not be completed because an unexpected error occurred. (Error code -50)." In addition to the above-mentioned character issue with folders, the problem is even worse with filenames.
    If, for example, you create a file with a slash in the filename on a Mac and copy it to the server, you get the following error message: "You cannot copy some of these items to the destination because their names are too long for the destination. Do you want to skip copying these items and continue copying the other items?" Select either the Stop or the Skip button; it does not matter which. The file name gets copied to the destination at a reduced size, and depending on the file type, the icon associated with the file may or may not be present. Furthermore, if you open that file on the server you get: "Couldn't open the file. It may be corrupt or a file format that isn't recognized." From the user's perspective, if they are not observant of the icon or file size, they may disregard the error message and think their file copied as intended; only later, when they open the file, do they discover it is corrupt.

    I want to make a note on this problem: it is the PC side causing the issue. You can change folder and file names all day on a Mac without a problem, as long as an illegal character is not involved. Once you change the file or folder name from a PC, the entire folder structure from that level down is corrupted, and it has to be resolved from a PC by creating a new folder and copying the contents into it, as stated above. Is something not configured correctly?

    SUSE Linux Enterprise Server 10 (x86_64)
    VERSION = 10
    PATCHLEVEL = 2
    LSB_VERSION = "core-2.0-noarch:core-3.0-noarch:core-2.0-x86_64:core-3.0-x86_64"

    Novell Open Enterprise Server 2.0.1 (x86_64)
    VERSION = 2.0.1
    PATCHLEVEL = 1
    BUILD

    Note: We use the Novell client on all Windows systems to connect to the servers for file access and network storage. We use AFP to allow OS X systems to connect to the servers.
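    Since the trigger in every scenario above is a name that is legal on one client platform but not the other, one low-tech mitigation while the server-side cause is being chased is to screen names before they land on the shared volume. A heuristic sketch (the character set is the usual Windows-invalid set; is_safe_cross_platform_name is a name of my own, and this does not replace fixing the server configuration):

        import re

        # Characters Windows clients reject in file and folder names; names
        # ending in a space or a dot are also invalid on the Windows side.
        INVALID_CHARS = re.compile(r'[\\/:*?"<>|]')

        def is_safe_cross_platform_name(name):
            """Heuristic pre-flight check for names shared between Mac and PC clients."""
            if INVALID_CHARS.search(name):
                return False
            if name != name.rstrip(' .'):   # trailing space or dot
                return False
            return True

        for candidate in ('Q1/Q2 Report', 'Budget 2009'):
            print(candidate, '->', is_safe_cross_platform_name(candidate))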

