Search Results

Search found 3286 results on 132 pages for 'embedded'.


  • Feedback on meeting of the Linux User Group of Mauritius

    Once upon a time in a country far far away... Okay, actually it's not that bad but it has been a while since the last meeting of the Linux User Group of Mauritius (LUGM). There have been plans in the past but it never really happened. Finally, Selven took the opportunity and organised a new meetup with low administrative overhead, proper scheduling on alternative dates and a small attendees' survey on the preferred option. All the pre-work was nicely executed. At first, I wasn't sure whether it would be possible to attend. Luckily I got some additional information, like that children should come too, and I was sold on this community gathering. According to other long-term members of the LUGM it was the first time 'ever' that a gathering was organised outside of Quatre Bornes, and I have to admit it was great! LUGM - user group meeting on the 15.06.2013 in L'Escalier Quick overview of Linux & the LUGM With a little bit of delay the LUGM meeting officially started with a quick overview and introduction to Linux presented by Avinash. During the session he told the audience that there had been quite some activity across the island some years ago but unfortunately it had been quiet in recent times. Of course, we also spoke about the acknowledged world dominance of Linux - thanks to Android - and the interesting possibilities for countries like Mauritius. It is known that a couple of public institutions have their back-end infrastructure running on Red Hat Linux systems, but the presence on the desktop is still very low. Users are simply hanging on to Windows XP and older versions of Microsoft Office. Following the introduction of the LUGM, Ajay joined the session and it quickly changed into a panel discussion with lots of interesting questions and answers, sharing of first-hand experience either on the job or in private use of Linux, and a couple of ideas about how the LUGM could promote Linux a bit more in Mauritius. It was great to get an insight into other attendees' opinions and activities, especially considering that I have been using Linux since around 1996/97. Frankly speaking, I bought a SuSE 4.x distribution back in those days because I couldn't achieve certain tasks on Windows NT 4.0 without spending a fortune. OpenELEC Mediacenter Next, Selven gave us a decent introduction to OpenELEC: Open Embedded Linux Entertainment Center (OpenELEC) is a small Linux distribution built from scratch as a platform to turn your computer into an XBMC media center. OpenELEC is designed to make your system boot fast, and the install is so easy that anyone can turn a blank PC into a media machine in less than 15 minutes. I didn't know about it until this presentation. In the past, I was mainly attached to Video Disk Recorder (VDR) as it allows the use of satellite receiver cards very easily. Hm, somehow I'm still missing my precious HTPC that I had to leave behind in Germany years ago. It was a great piece of hardware and software: a self-built PC in a standard HiFi-sized (43cm) black desktop casing with 2 full-featured Hauppauge DVB-S cards, an old-fashioned Voodoo graphics card, a WiFi card, a Pioneer slot-in DVD drive, and fully remote controlled via infra-red thanks to Debian, VDR and LIRC. With EPG, scheduled recordings and general multimedia-centre features it offered all the necessary comfort in the living room, besides a Nintendo game console; actually a GameCube at that time... But I have to admit that putting OpenELEC on a Raspberry Pi would be a cool DIY project in the near future. 
LUGM - our next generation of Linux users (15.06.2013) Project Evil Genius (PEG) Don't be scared of the paragraph header. Ish gave us a cool explanation of why he named it PEG - Project Evil Genius; it's because of the time of the day when he was scripting down his ideas to be able to build, package and provide software applications to various Linux distributions. The main influence came from openSuSE, but the platform didn't cater for his needs and ideas, so he started to work out something on his own. During his passionate session he also talked about the amazing experience he had due to other Linux users from all over the world. During the next couple of days Ish promised to put his script on GitHub... Looking forward to that. Check out Ish's personal blog over at hacklog.in. Highly recommended to read. Why India? Simply because the registration fees per year for an Indian domain are approximately 20 times less than for a Mauritian domain (.mu). Exploring the beach of L'Escalier after the meeting 'After-party' at the beach of L'Escalier Puh, after such interesting sessions, ideas around Linux and good conversation during the breaks and over lunch it was time for a little break-out. Selven suggested that we all should head down to the beach of L'Escalier and get some impressions of nature down here in the south of the island. Talking about 'beach' ;-) - absolutely not comparable to the white-sanded ones here in Flic en Flac... There are no lagoons down at the south coast of Mauritius, and watching the breaking waves is a different experience and joy after all. Unfortunately, I was a little bit worried about the thoughtless littering at such a remote location. You have to drive on natural paths through the sugar cane fields and I was really shocked by the amount of rubbish lying around almost everywhere. Sad, really sad, and it concurs with Yasir's recent article on the same topic. Resumé & outlook It was a great event. I met new people, had some good conversations, and even my children enjoyed themselves the whole day. The location was well-chosen, enough space for each and everyone, parking spaces and even a playground for the children. Also, a big "Thank You" to Selven and his helpers for the organisation and preparation of lunch. I'm kind of sure that this was an exceptional meeting of LUGM and I'm really looking forward to the next gathering of Linux geeks. Hopefully, soon. All images are courtesy of Avinash Meetoo. More pictures are available on Flickr.

    Read the article

  • BI Applications overview

    - by sv744
    Welcome to the Oracle BI applications blog! This blog will talk about various features, general roadmap, description of functionality and implementation steps related to Oracle BI applications. In the first post we start with an overview of the BI apps and will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback on that. The Oracle BI applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, as well as front-line sales reps, each of whom have different needs. The applications integrate and transform data from a range of enterprise sources—including Siebel, Oracle, PeopleSoft, SAP, and others—into actionable intelligence for each business function and user role. This blog starts with the key benefits and characteristics of Oracle BI applications. In a series of subsequent blogs, each of these points will be explained in detail. Why BI apps? Demonstrate the value of BI to a business user, show reports / dashboards / model that can answer their business questions as part of the sales cycle. Demonstrate technical feasibility of BI project and significantly lower risk and improve success Build vs. Buy benefit Don't have to start with a blank sheet of paper. Help consolidate disparate systems Data integration in M&A situations Insulate BI consumers from changes in the OLTP Present OLTP data and highlight issues of poor data / missing data – and improve data quality and accuracy Prebuilt Integrations BI apps support prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, PeopleSoft, JD Edwards, Siebel, SAP Co-developed with inputs from functional experts in BI and Applications teams. Out of the box dimensional model to source model mappings Multi source and Multi Instance support Rich Data Model BI apps have a very rich dimensional data model built over 10 years that incorporates best practices from a BI modeling perspective as well as reflects the source system complexities. Conformed dimensional model across all business subject areas allows cross functional reporting, e.g. customer / supplier 360 Over 360 fact tables across 7 product areas CRM – 145, SCM – 47, Financials – 28, Procurement – 20, HCM – 27, Projects – 18, Campus Solutions – 21, PLM - 56 Supported by 300 physical dimensions Support for extensive calendars; Gregorian, enterprise and ledger based Conformed data model and metrics for real time vs warehouse based reporting Multi-tenant enabled Extensive BI related transformations BI apps ETL and data integration support various transformations required for dimensional models and reporting requirements. All these have been distilled into common patterns and abstracted logic which can be readily reused across different modules Slowly Changing Dimension support Hierarchy flattening support Row / Column Hybrid Hierarchy Flattening As Is vs. 
As Was hierarchy support Currency Conversion: support for 3 corporate, CRM, ledger and transaction currencies UOM conversion Internationalization / Localization Dynamic Data translations Code standardization (Domains) Historical Snapshots Cycle and process lifecycle computations Balance Facts Equalization of GL accounting chartfields/segments Standardized values for categorizing GL accounts Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL Materialization of data only available through costly and complex APIs e.g. Fusion Payroll, EBS / Fusion Accruals Complex event interpretation of source data – e.g. o    What constitutes a transfer o    Deriving supervisors via position hierarchy o    Deriving primary assignment in PSFT o    Categorizing and transposition to measures of Payroll Balances to specific metrics to support side-by-side comparison of measures of, for example, Fixed Salary, Variable Salary, Tax, Bonus, Overtime Payments. o    Counting of Events – e.g. converting events to fact counters so that for example the number of hires can easily be added up and compared alongside the total transfers and terminations. Multi pass processing of multiple sources e.g. headcount, salary, promotion, performance to allow side-by-side comparison. Adding value to data to aid analysis through banding, additional domain classifications and groupings to allow higher level analytical reporting and data discovery Calculation of complex measures, examples: o    COGS, DSO, DPO, Inventory turns, etc. o    Transfers within a Hierarchy or out of / into a hierarchy relative to view point in hierarchy. Configurability and Extensibility support BI apps offer support for extensibility for various entities as automated extensibility or part of extension methodology Key Flexfields and Descriptive Flexfields support Extensible attribute support (JDE) Conformed Domains ETL Architecture BI apps offer a modular adapter architecture which allows support of multiple product lines into a single conformed model Multi Source Multi Technology Orchestration – creates a load plan taking into account task dependencies and the customer's deployment, generating a plan for the customer's set of complex ETL tasks Plan optimization allowing parallel ETL tasks Oracle: Bitmap indexes and partition management High availability support Follow-the-sun support. 
TCO BI apps support several utilities / capabilities that help with overall total cost of ownership and ensure a rapid implementation Improved cost of ownership – lower cost to deploy On-going support for new versions of the source application Task-based setup flows Data Lineage Functional setup performed in Web UI by a functional person Configuration Test to Production support Security BI apps support both data and object security enabling implementations to quickly configure the application as per the reporting security needs Fine grain object security at report / dashboard and presentation catalog level Data Security integration with source systems Extensible to support external data security rules Extensive Set of KPIs Over 7000 base and derived metrics across all modules Time series calculations (YoY, % growth, etc.) Common Currency and UOM reporting Cross subject area KPIs (analyzing HR vs GL data, drill from GL to AP/AR, etc.) Prebuilt reports and dashboards 3000+ prebuilt reports supporting a large number of industries Hundreds of role based dashboards Dynamic currency conversion at dashboard level Highly tuned Performance The BI apps have been tuned over the years for both a very performant ETL and dashboard performance. The applications use best practices and advanced database features to enable the best possible performance. Optimized data model for BI and analytic queries Prebuilt aggregates & the ability for customers to create their own aggregates easily on warehouse facts allows for scalable end user performance Incremental extracts and loads Incremental Aggregate build Automatic table index and statistics management Parallel ETL loads Source system deletes handling Low latency extract with Golden Gate Micro ETL support Bitmap Indexes Partitioning support Modularized deployment, start small and add other subject areas seamlessly Source Specific Staging and Real Time Schema Support for source specific operational reporting schema for EBS, PSFT, Siebel and JDE Application Integrations The BI apps also allow for integration with source systems as well as other applications that provide value add through BI and enable BI consumption during operational decision making Embedded dashboards for Fusion, EBS and Siebel applications Action Link support Marketing Segmentation Sales Predictor Dashboard Territory Management External Integrations The BI apps data integration choices include support for loading external data External data enrichment choices: UNSPSC, Item class etc. Extensible Spend Classification Broad Deployment Choices Exalytics support Databases: Oracle, Exadata, Teradata, DB2, MSSQL ETL tool of choice: ODI (coming), Informatica Extensible and Customizable Extensible architecture and Methodology to add custom and external content Upgradable across releases Thanks for reading a long post, and be on the lookout for future posts. We look forward to your valuable feedback on these topics as well as suggestions on what other topics you would like us to cover.

    Read the article

  • Calculated Fields - Idiosyncrasies

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved Calculated Fields and some of their Idiosyncrasies Did you try to write a calculated field formula directly into the screen? Good Luck – You'll need it! Calculated Fields are a sophisticated OOB feature of SharePoint, so you could think that they are best left to the end users – at least to the power users. But they reach their limits before the "Professionals" do, and the tough ones come back to us anyway. Back to business; the simpler the formula, the easier it is. Still, use your favorite editor to write it, then cut it and paste it into the ridiculously small window. What about complex formulae? Write them in steps! Here is a case in point and an idiosyncrasy or two. Our welders need to be certified and recertified every two years. Some of them are certifiable…, but I digress. To be certified you need to pass an eye exam, and two more tests – test A and test B. For each of those you have an expiry date. When renewed, each expiry date is advanced by two years from the date of renewal. My users wanted a visual clue so that when the supervisor looks at the list, she'll have a KPI symbol telling her if anything expired (Red), is going to expire within the next 90 days (Yellow) or is not to be worried about (Green). Not all the dates are filled, and any blank date implies a complete lack of certification in the particular requirement. Obviously, I needed to figure the minimum of these 3 dates – a simple enough formula: =MIN([Date_EyeExam], [Date_TestA], [Date_TestB]). Aha! Here is idiosyncrasy #1. When one of the dates is a null, MIN(Date1, Date2) returns the non-null date. Null is construed as "Far, far away". The funny thing is that when you compare it to Today, the null is the lesser one. So a null is less than today, but not when MIN is calculated. Now, to me the fact that the welder does not have an exam date is synonymous with his exam being prehistoric, or at least past due. So here is what I did: Solution: Let's set a blank date to 1/1/1800. How will we do that? Use the IF. IF([Field] rel relValue, TrueValue, FalseValue). rel is any relationship operator <, >, <=, >=, =, <>. If the field is related to the relValue as prescribed, the "IF" returns the TrueValue, otherwise it returns the FalseValue. Thus: =IF([SomeDate]="",1/1/1800,[SomeDate]) will return 1/1/1800 if the date is blank and the date itself if not. So, using this formula, if the welder missed an exam, the returned exam date will be far in the past. It would be nice if we could take such a formula and make it into a reusable function. Alas, here is a serious shortcoming of calculated fields: you cannot write subs and functions!! Aha, but we can use interim calculated fields! So let's create 3 calculated fields as follows: 1: c_DateTestA as a calculated field of the date type, with the formula:  IF([Date_TestA]="",1/1/1800,[Date_TestA]) 2: c_DateTestB as a calculated field of the date type, with the formula:  IF([Date_TestB]="",1/1/1800,[Date_TestB]) 3: c_DateEyeExam as a calculated field of the date type, with the formula:  IF([Date_EyeExam]="",1/1/1800,[Date_EyeExam]) And now use these to get c_MinDate. This is again a calculated field of type date with the formula: MIN(c_DateTestA, c_DateTestB, c_DateEyeExam) Note that I omitted the square brackets. In "properly named" fields – where there are no embedded spaces – we don't need the square brackets. I actually strongly recommend using underscores in place of spaces in all the field names in your lists. 
Among other things, it makes using CAML much simpler. Now, we still need to apply the KPI to this minimal date. I am going to use the available KPI graphics that come with SharePoint and are always available in your 12 hive. "/_layouts/images/kpidefault-2.gif" is the Red KPI "/_layouts/images/kpidefault-1.gif" is the Yellow KPI "/_layouts/images/kpidefault-0.gif" is the Green KPI And here is the nested IF formula that will do the trick: =IF(c_MinDate<=Today,"/_layouts/images/kpidefault-2.gif", IF(c_MinDate<Today+90,"/_layouts/images/kpidefault-1.gif","/_layouts/images/kpidefault-0.gif")) Nice! BUT when I tested, it did not work! This is Idiosyncrasy #2: A calculated field based on a calculated field based on a calculated field does not work. You have to stop at two levels! Back to the drawing board: We have to reduce by one level. How? We'll eliminate the c_DateX items in the formula and replace them with the proper IF formulae. Notice that this needs to be done with precision. You are much better off doing it in your favorite line editor than inside the cramped space that SharePoint gives you. So here is the result: MIN(IF([Date_TestA]="",1/1/1800,[Date_TestA]), IF([Date_TestB]="",1/1/1800,[Date_TestB]), IF([Date_EyeExam]="",1/1/1800,[Date_EyeExam])) Note that the parentheses have to match for this formula to work. Now we can leave the KPI formula as is and test again. This time with SUCCESS! Conclusion: build the inner functions first, and then embed them inside the outer formulae. Do this as long as necessary. Use your favorite line editor. Limit yourself to 2 levels. That's all folks! Almost! As soon as I finished doing all of the above, my users added yet another level of complexity. They added another test, a test that must be passed but never expires, and asked for yet another KPI, this time in Black to denote that any test is not just past due, but altogether missing. I just finished this. Let's hope it ends here! And OH, the formula =IF(c_MinDate<=Today,"/_layouts/images/kpidefault-2.gif",IF(c_MinDate<Today+90,"/_layouts/images/kpidefault-1.gif","/_layouts/images/kpidefault-0.gif")) deals with "Today", and this is a subject deserving a discussion of its own!  That's all folks?! (and this time I mean it)
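To recap the final two-level layout in one place, here is a readability sketch (not paste-ready text: each left-hand name is a separate calculated column and only the expression to the right of the = goes into its formula box; KPI_Image is a placeholder name of my own for the column that holds the image URL):

```
c_MinDate = MIN(IF([Date_TestA]="",1/1/1800,[Date_TestA]),
                IF([Date_TestB]="",1/1/1800,[Date_TestB]),
                IF([Date_EyeExam]="",1/1/1800,[Date_EyeExam]))

KPI_Image = IF(c_MinDate<=Today,"/_layouts/images/kpidefault-2.gif",
            IF(c_MinDate<Today+90,"/_layouts/images/kpidefault-1.gif",
                                  "/_layouts/images/kpidefault-0.gif"))
```

Because the inner IFs are inlined, only one calculated column depends on another, which keeps the chain within the two-level limit described above.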

    Read the article

  • Organization & Architecture UNISA Studies – Chap 4

    - by MarkPearl
    Learning Outcomes Explain the characteristics of memory systems Describe the memory hierarchy Discuss cache memory principles Discuss issues relevant to cache design Describe the cache organization of the Pentium Computer Memory Systems There are key characteristics of memory… Location – internal or external Capacity – expressed in terms of bytes Unit of Transfer – the number of bits read out of or written into memory at a time Access Method – sequential, direct, random or associative From a user's perspective the two most important characteristics of memory are… Capacity Performance – access time, memory cycle time, transfer rate The trade-offs for memory happen along three axes… Faster access time, greater cost per bit Greater capacity, smaller cost per bit Greater capacity, slower access time This leads to people using a tiered approach in their use of memory. As one goes down the hierarchy, the following occurs… Decreasing cost per bit Increasing capacity Increasing access time Decreasing frequency of access of the memory by the processor The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above. A portion of main memory can be used as a buffer to hold data temporarily that is to be read out to disk. This is sometimes referred to as a disk cache and improves performance in two ways… Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement. Some data designed for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk. Cache Memory Principles Cache memory is substantially faster than main memory. A caching system works as follows: When a processor attempts to read a word of memory, a check is made to see if this is in cache memory. If it is, the data is supplied. If it is not in the cache, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block. Elements of Cache Design While there are a large number of cache implementations, there are a few basic design elements that serve to classify and differentiate cache architectures… Cache Addresses Cache Size Mapping Function Replacement Algorithm Write Policy Line Size Number of Caches Cache Addresses Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used the designer may choose to place the cache between the MMU (memory management unit) and the processor or between the MMU and main memory. 
The disadvantage of virtual memory is that most virtual memory systems supply each application with the same virtual memory address space (each application sees virtual memory starting at memory address 0), which means the cache memory must be completely flushed with each application context switch or extra bits must be added to each line of the cache to identify which virtual address space the address refers to. Cache Size We would like the size of the cache to be small enough so that the overall average cost per bit is close to that of main memory alone and large enough so that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones. Mapping Function Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used… Direct – simplest technique, maps each block of main memory into only one possible cache line Associative – permits each main memory block to be loaded into any line of the cache Set Associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages For detailed explanations of each approach – read the text book (page 148 – 154) Replacement Algorithm For associative and set-associative mapping a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches… LRU (Least recently used) FIFO (First in first out) LFU (Least frequently used) Random selection Write Policy When a block resident in the cache is to be replaced, there are two cases to consider. If no writes to that block have happened in the cache – discard it. If a write has occurred, a process needs to be initiated where the changes in the cache are propagated back to the main memory. There are several approaches to achieve this including… Write Through – all writes to the cache are done to the main memory as well at the point of the change Write Back – when a block is replaced, all dirty bits are written back to main memory The problem is complicated when we have multiple caches; there are techniques to accommodate for this but I have not summarized them. Line Size When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that the data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play… Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after they are fetched. As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future. The relationship between block size and hit ratio is complex, and no set approach is judged to be the best in all circumstances.
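To make the mapping-function discussion above a bit more concrete, here is a small C sketch (not from the textbook; the cache geometry is a made-up assumption) of how a direct-mapped cache splits an address into tag, line index and byte offset:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical direct-mapped cache: 64 lines of 16 bytes each (1 KiB total). */
#define LINE_SIZE 16u   /* bytes per cache line         */
#define NUM_LINES 64u   /* number of lines in the cache */

typedef struct {
    int      valid;
    uint32_t tag;
    uint8_t  data[LINE_SIZE];
} cache_line_t;

static cache_line_t cache[NUM_LINES];

/* Direct mapping: each memory block maps to exactly one cache line. */
static int cache_lookup(uint32_t addr)
{
    uint32_t index = (addr / LINE_SIZE) % NUM_LINES;   /* which line to check   */
    uint32_t tag   = addr / (LINE_SIZE * NUM_LINES);   /* identifies the block  */
    /* the remaining bits (addr % LINE_SIZE) select the byte within the line */

    return cache[index].valid && cache[index].tag == tag;   /* 1 = hit, 0 = miss */
}

int main(void)
{
    uint32_t addr = 0x1A2B3C;
    printf("address 0x%X -> line %u, tag %u, %s\n",
           addr,
           (addr / LINE_SIZE) % NUM_LINES,
           addr / (LINE_SIZE * NUM_LINES),
           cache_lookup(addr) ? "hit" : "miss");
    return 0;
}
```

An associative cache would drop the fixed index and compare the tag against every line, while a set-associative cache compares it against the lines of one small set; that is the trade-off between the three techniques listed above.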
Pentium 4 and ARM cache organizations The Pentium 4 processor core consists of four major components: Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L1 instruction cache Out-of-order execution logic – Schedules execution of the micro-operations subject to data dependencies and resource availability – thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future Execution units – These units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers Memory subsystem – This unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss and to access the system I/O resources

    Read the article

  • Can't Remove Logical Drive/Array from HP P400

    - by Myles
    This is my first post here. Thank you in advance for any assistance with this matter. I'm trying to remove a logical drive (logical drive 2) and an array (array "B") from my Smart Array P400. The host is a DL580 G5 running 64-bit Red Hat Enterprise Linux Server release 5.7 (Tikanga). I am unable to remove the array using either hpacucli or cpqacuxe. I believe it is because of "OS Status: LOCKED". The file system that lives on this array has been unmounted. I do not want to reboot the host. Is there some way to "release" this logical drive so I can remove the array? Note that I do not need to preserve the data on logical drive 2. I intend to physically remove the drives from the machine and replace them with larger drives. I'm using the cciss kernel module that ships with Red Hat 5.7. Here is some information pertaining to the host and the P400 configuration: [root@gort ~]# cat /etc/redhat-release Red Hat Enterprise Linux Server release 5.7 (Tikanga) [root@gort ~]# uname -a Linux gort 2.6.18-274.el5 #1 SMP Fri Jul 8 17:36:59 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux [root@gort ~]# rpm -qa | egrep '^(hp|cpq)' cpqacuxe-9.30-15.0 hp-health-9.25-1551.7.rhel5 hpsmh-7.1.2-3 hpdiags-9.3.0-466 hponcfg-3.1.0-0 hp-snmp-agents-9.25-2384.8.rhel5 hpacucli-9.30-15.0 [root@gort ~]# hpacucli HP Array Configuration Utility CLI 9.30.15.0 Detecting Controllers...Done. Type "help" for a list of supported commands. Type "exit" to close the console. => ctrl all show config detail Smart Array P400 in Slot 0 (Embedded) Bus Interface: PCI Slot: 0 Cache Serial Number: PA82C0J9SVW34U RAID 6 (ADG) Status: Enabled Controller Status: OK Hardware Revision: D Firmware Version: 7.22 Rebuild Priority: Medium Expand Priority: Medium Surface Scan Delay: 15 secs Surface Scan Mode: Idle Wait for Cache Room: Disabled Surface Analysis Inconsistency Notification: Disabled Post Prompt Timeout: 0 secs Cache Board Present: True Cache Status: OK Cache Ratio: 25% Read / 75% Write Drive Write Cache: Disabled Total Cache Size: 256 MB Total Cache Memory Available: 208 MB No-Battery Write Cache: Disabled Cache Backup Power Source: Batteries Battery/Capacitor Count: 1 Battery/Capacitor Status: OK SATA NCQ Supported: True Logical Drive: 1 Size: 136.7 GB Fault Tolerance: RAID 1 Heads: 255 Sectors Per Track: 32 Cylinders: 35132 Strip Size: 128 KB Full Stripe Size: 128 KB Status: OK Caching: Enabled Unique Identifier: 600508B100184A395356573334550002 Disk Name: /dev/cciss/c0d0 Mount Points: /boot 101 MB, /tmp 7.8 GB, /usr 3.9 GB, /usr/local 2.0 GB, /var 3.9 GB, / 2.0 GB, /local 113.2 GB OS Status: LOCKED Logical Drive Label: A0027AA78DEE Mirror Group 0: physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK) Mirror Group 1: physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK) Drive Type: Data Array: A Interface Type: SAS Unused Space: 0 MB Status: OK Array Type: Data physicaldrive 1I:1:1 Port: 1I Box: 1 Bay: 1 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM57RF40000983878FX Model: HP DG146BB976 Current Temperature (C): 29 Maximum Temperature (C): 35 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown physicaldrive 1I:1:2 Port: 1I Box: 1 Bay: 2 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM55VQC000098388524 Model: HP DG146BB976 Current Temperature (C): 29 Maximum Temperature (C): 36 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown Logical Drive: 2 Size: 546.8 GB 
Fault Tolerance: RAID 5 Heads: 255 Sectors Per Track: 32 Cylinders: 65535 Strip Size: 64 KB Full Stripe Size: 256 KB Status: OK Caching: Enabled Parity Initialization Status: Initialization Completed Unique Identifier: 600508B100184A395356573334550003 Disk Name: /dev/cciss/c0d1 Mount Points: None OS Status: LOCKED Logical Drive Label: A5C9C6F81504 Drive Type: Data Array: B Interface Type: SAS Unused Space: 0 MB Status: OK Array Type: Data physicaldrive 1I:1:3 Port: 1I Box: 1 Bay: 3 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM2H5PE00009802NK19 Model: HP DG146ABAB4 Current Temperature (C): 30 Maximum Temperature (C): 37 PHY Count: 1 PHY Transfer Rate: Unknown physicaldrive 1I:1:4 Port: 1I Box: 1 Bay: 4 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM28YY400009750MKPJ Model: HP DG146ABAB4 Current Temperature (C): 31 Maximum Temperature (C): 36 PHY Count: 1 PHY Transfer Rate: 3.0Gbps physicaldrive 2I:1:5 Port: 2I Box: 1 Bay: 5 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM2FGYV00009802N3GN Model: HP DG146ABAB4 Current Temperature (C): 30 Maximum Temperature (C): 38 PHY Count: 1 PHY Transfer Rate: Unknown physicaldrive 2I:1:6 Port: 2I Box: 1 Bay: 6 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM8AFAK00009920MMV1 Model: HP DG146BB976 Current Temperature (C): 31 Maximum Temperature (C): 41 PHY Count: 2 PHY Transfer Rate: Unknown, Unknown physicaldrive 2I:1:7 Port: 2I Box: 1 Bay: 7 Status: OK Drive Type: Data Drive Interface Type: SAS Size: 146 GB Rotational Speed: 10000 Firmware Revision: HPDE Serial Number: 3NM2FJQD00009801MSHQ Model: HP DG146ABAB4 Current Temperature (C): 29 Maximum Temperature (C): 39 PHY Count: 1 PHY Transfer Rate: Unknown

    Read the article

  • Can't log in via SSH to any accounts set to use /bin/bash as a default shell

    - by Gui Ambros
    I'm trying to install bash as the default shell on a ARM Linux running on an embedded device (Synology DS212+ NAS). But there's something really wrong, and I can't figure out what it is. Symptoms: 1) Root has /bin/bash as default shell, and can log in normally via SSH: $ grep root /etc/passwd root:x:0:0:root:/root:/bin/bash $ ssh root@NAS root@NAS's password: Last login: Sun Dec 16 14:06:56 2012 from desktop # 2) joeuser has /bin/bash as default shell, and receives "Permission denied" when trying to log in via SSH: $ grep joeuser /etc/passwd joeuser:x:1029:100:Joe User:/home/joeuser:/bin/bash $ ssh joeuser@localhost joeuser@NAS's password: Last login: Sun Dec 16 14:07:22 2012 from desktop Permission denied, please try again. Connection to localhost closed. 3) changing joeuser's shell back to /bin/sh: $ grep joeuser /etc/passwd joeuser:x:1029:100:Joe User:/home/joeuser:/bin/sh $ ssh joeuser@localhost Last login: Sun Dec 16 15:50:52 2012 from localhost $ To make things even more strange, I can log in as joeuser using /bin/bash using the serial console (!). Also a su - joeuser as root works fine, so the bash binary itself is working fine. In an act of despair, I changed joeuser's uid to 0 on /etc/passwd, but also didn't work, so it doesn't seem to be anything permission related. Seems that bash is doing some extra checking that sshd didn't like, and blocking the connections for non-root users. Maybe some sort of sanity checking - or terminal emulation - that is triggering the SIGCHLD, but only when called via ssh. I already went through every single item on sshd_config, and also put SSHD in debug mode, but didn't find anything strange. Here's my /etc/ssh/sshd_config: LogLevel DEBUG LoginGraceTime 2m PermitRootLogin yes RSAAuthentication yes PubkeyAuthentication yes AuthorizedKeysFile %h/.ssh/authorized_keys ChallengeResponseAuthentication no UsePAM yes AllowTcpForwarding no ChrootDirectory none Subsystem sftp internal-sftp -f DAEMON -u 000 And here's the output from /usr/syno/sbin/sshd -d, showing the failed attempt of joeuser trying to log in, with /bin/bash as the shell: debug1: Config token is loglevel debug1: Config token is logingracetime debug1: Config token is permitrootlogin debug1: Config token is rsaauthentication debug1: Config token is pubkeyauthentication debug1: Config token is authorizedkeysfile debug1: Config token is challengeresponseauthentication debug1: Config token is usepam debug1: Config token is allowtcpforwarding debug1: Config token is chrootdirectory debug1: Config token is subsystem debug1: HPN Buffer Size: 87380 debug1: sshd version OpenSSH_5.8p1-hpn13v11 debug1: read PEM private key done: type RSA debug1: private host key: #0 type 1 RSA debug1: read PEM private key done: type DSA debug1: private host key: #1 type 2 DSA debug1: read PEM private key done: type ECDSA debug1: private host key: #2 type 3 ECDSA debug1: rexec_argv[0]='/usr/syno/sbin/sshd' debug1: rexec_argv[1]='-d' Set /proc/self/oom_adj from 0 to -17 debug1: Bind to port 22 on ::. debug1: Server TCP RWIN socket size: 87380 debug1: HPN Buffer Size: 87380 Server listening on :: port 22. debug1: Bind to port 22 on 0.0.0.0. debug1: Server TCP RWIN socket size: 87380 debug1: HPN Buffer Size: 87380 Server listening on 0.0.0.0 port 22. debug1: Server will not fork when running in debugging mode. 
debug1: rexec start in 6 out 6 newsock 6 pipe -1 sock 9 debug1: inetd sockets after dupping: 4, 4 Connection from 127.0.0.1 port 52212 debug1: HPN Disabled: 0, HPN Buffer Size: 87380 debug1: Client protocol version 2.0; client software version OpenSSH_5.8p1-hpn13v11 SSH: Server;Ltype: Version;Remote: 127.0.0.1-52212;Protocol: 2.0;Client: OpenSSH_5.8p1-hpn13v11 debug1: match: OpenSSH_5.8p1-hpn13v11 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.8p1-hpn13v11 debug1: permanently_set_uid: 1024/100 debug1: MYFLAG IS 1 debug1: list_hostkey_types: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: AUTH STATE IS 0 debug1: REQUESTED ENC.NAME is 'aes128-ctr' debug1: kex: client->server aes128-ctr hmac-md5 none SSH: Server;Ltype: Kex;Remote: 127.0.0.1-52212;Enc: aes128-ctr;MAC: hmac-md5;Comp: none debug1: REQUESTED ENC.NAME is 'aes128-ctr' debug1: kex: server->client aes128-ctr hmac-md5 none debug1: expecting SSH2_MSG_KEX_ECDH_INIT debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: KEX done debug1: userauth-request for user joeuser service ssh-connection method none SSH: Server;Ltype: Authname;Remote: 127.0.0.1-52212;Name: joeuser debug1: attempt 0 failures 0 debug1: Config token is loglevel debug1: Config token is logingracetime debug1: Config token is permitrootlogin debug1: Config token is rsaauthentication debug1: Config token is pubkeyauthentication debug1: Config token is authorizedkeysfile debug1: Config token is challengeresponseauthentication debug1: Config token is usepam debug1: Config token is allowtcpforwarding debug1: Config token is chrootdirectory debug1: Config token is subsystem debug1: PAM: initializing for "joeuser" debug1: PAM: setting PAM_RHOST to "localhost" debug1: PAM: setting PAM_TTY to "ssh" debug1: userauth-request for user joeuser service ssh-connection method password debug1: attempt 1 failures 0 debug1: do_pam_account: called Accepted password for joeuser from 127.0.0.1 port 52212 ssh2 debug1: monitor_child_preauth: joeuser has been authenticated by privileged process debug1: PAM: establishing credentials User child is on pid 9129 debug1: Entering interactive session for SSH2. debug1: server_init_dispatch_20 debug1: server_input_channel_open: ctype session rchan 0 win 65536 max 16384 debug1: input_session_request debug1: channel 0: new [server-session] debug1: session_new: session 0 debug1: session_open: channel 0 debug1: session_open: session 0: link with channel 0 debug1: server_input_channel_open: confirm session debug1: server_input_global_request: rtype [email protected] want_reply 0 debug1: server_input_channel_req: channel 0 request pty-req reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req pty-req debug1: Allocating pty. debug1: session_new: session 0 debug1: session_pty_req: session 0 alloc /dev/pts/1 debug1: server_input_channel_req: channel 0 request shell reply 1 debug1: session_by_channel: session 0 channel 0 debug1: session_input_channel_req: session 0 req shell debug1: Setting controlling tty using TIOCSCTTY. debug1: Received SIGCHLD. 
debug1: session_by_pid: pid 9130 debug1: session_exit_message: session 0 channel 0 pid 9130 debug1: session_exit_message: release channel 0 debug1: session_by_tty: session 0 tty /dev/pts/1 debug1: session_pty_cleanup: session 0 release /dev/pts/1 Received disconnect from 127.0.0.1: 11: disconnected by user debug1: do_cleanup debug1: do_cleanup debug1: PAM: cleanup debug1: PAM: closing session debug1: PAM: deleting credentials Here you have the full output of sshd -dd, together with ssh -vv. Bash: # bash --version GNU bash, version 3.2.49(1)-release (arm-none-linux-gnueabi) Copyright (C) 2007 Free Software Foundation, Inc. The bash binary was cross compiled from source. I also tried using a pre-compiled binary from the Optware distribution, but had the exact same problem. I checked for missing shared libraries using objdump -x, but they're all there. Any ideas what could be causing this "Permission denied, please try again."? I'm almost diving in the bash source code to investigate, but trying to avoid hours chasing something that may be silly.
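Two cheap checks that might narrow this down before reading bash's source (both are assumptions on my part, not a confirmed diagnosis for the DS212+): sshd or PAM (pam_shells) may refuse login shells that are not listed in /etc/shells, and running a command over SSH still goes through the user's login shell, which helps separate a bash problem from an sshd/PAM problem.

```sh
# Is /bin/bash registered as a valid login shell? Some sshd/PAM setups
# (pam_shells) refuse shells that are not listed here.
cat /etc/shells
# If it is missing, add it (as root):
echo /bin/bash >> /etc/shells

# Run a command over SSH: sshd invokes the user's login shell with -c, so
# an error printed here points at bash itself rather than at sshd/PAM.
ssh joeuser@NAS 'echo $SHELL; echo bash works'
```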

    Read the article

  • Mount TMPFS instead of ro /dev

    - by schiggn
    I am working on a ARM-Based embedded system with a custom Debian Linux based on kernel 2.6.31. In the final system, the Root file system is stored as squashfs on flash. Now, the folder /dev is created by udev, but since there is no hot plugging functionality needed and booting time is critical, I wanted to delete udev and "hard code" the /dev folder (read here, page 5). because i still need to change parameters of the devices (with ioctl /sysfs) this does not work for me in this case. so i thought of mounting a tmpfs on /dev and change the parameters there. is this possible? and how to do best? my approach would be: delete /dev from RFS create tar containing basic devices mount tmpfs /dev untar tar-file into /dev change parameters Could this work? Do you see any problems? I found out, that you can mount on top of already mounted mount point, is it somehow possible just to take data with while mounting the new file system? if so that would be very convenient! Thanks Update: I just tried that out, but I'm stuck at a certain point. I packed all my devices into devices.tar, packed it into /usr of my squashfs and added the following lines to mountkernfs.sh, which is executed right after INIT. #mount /dev on tmpfs echo -n "Mounting /dev on tmpfs..." mount -o size=5M,mode=0755 -t tmpfs tmpfs /dev mknod -m 600 /dev/console c 5 1 mknod -m 600 /dev/null c 1 3 echo "done." echo -n "Populating /dev..." tar -xf /usr/devices.tar -C /dev echo "done." This works fine on the version over NFS, if I place printf's in the code, I can see it executing, if I comment out the extracting part, its complaining about missing devices. Booting OK mmc0: new high speed SDHC card at address 0007 mmcblk0: mmc0:0007 SD04G 3.67 GiB mmcblk0: p1 IP-Config: Unable to set interface netmask (-22). Looking up port of RPC 100003/2 on 192.168.1.234 Looking up port of RPC 100005/1 on 192.168.1.234 VFS: Mounted root (nfs filesystem) on device 0:14. Freeing init memory: 136K INIT: version 2.86 booting Mounting /dev on tmpfs...done. Populating /dev...done. Initializing /var...done. Setting the system clock. System Clock set to: Thu Sep 13 11:26:23 UTC 2012. INIT: Entering runlevel: 2 UBI: attaching mtd8 to ubi0 Commenting out the extraction of the tar mmc0: new high speed SDHC card at address 0007 mmcblk0: mmc0:0007 SD04G 3.67 GiB mmcblk0: p1 IP-Config: Unable to set interface netmask (-22). Looking up port of RPC 100003/2 on 192.168.1.234 Looking up port of RPC 100005/1 on 192.168.1.234 VFS: Mounted root (nfs filesystem) on device 0:14. Freeing init memory: 136K INIT: version 2.86 booting Mounting /dev on tmpfs...done. Populating /dev...done. Initializing /var...done. Setting the system clock. Cannot access the Hardware Clock via any known method. Use the --debug option to see the details of our search for an access method. Unable to set System Clock to: Thu Sep 13 12:24:00 UTC 2012 ... (warning). INIT: Entering runlevel: 2 libubi: error!: cannot open "/dev/ubi_ctrl" So far so good. But if I pack the whole story into a squashfs and boot from there, it is acting strange. It's telling me while booting that it is unable to open an initial console and its throwing errors on mounting the UBIFS devices, but finally provides a login anyway. Over that my echo's are not executed. If I then log in, /dev is mounted as TMPFS as desired and all the devices reside inside. When I redo the "mount" command to mount the UBIFS partitions it is executed whitout problem and useable. 
From squashfs VFS: Mounted root (squashfs filesystem) readonly on device 31:15. Freeing init memory: 136K Warning: unable to open an initial console. mmc0: new high speed SDHC card at address 0007 mmcblk0: mmc0:0007 SD04G 3.67 GiB mmcblk0: p1 UBIFS error (pid 484): ubifs_get_sb: cannot open "ubi1_0", error -19 Additionally, a part of the rest of the bootscripts is still exexuted, but not all of them. Does anyone has a clue why? Other question, is 5MB enough/too much for /dev?
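On the side question of carrying existing content over while mounting the new file system: one approach (a sketch, untested on this particular board, and assuming your mount binary supports --move) is to populate the tmpfs at a staging point first and then move-mount it on top of /dev, so nothing useful is hidden by the overmount:

```sh
# Stage the tmpfs somewhere else, copy the live /dev into it, then
# move-mount it on top of /dev.
mkdir -p /mnt/newdev
mount -o size=5M,mode=0755 -t tmpfs tmpfs /mnt/newdev
cp -a /dev/* /mnt/newdev/
mount --move /mnt/newdev /dev
```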

    Read the article

  • CISCO 2911 Router configuration

    - by bala
    Device cisco 2911 router configuration support is required please. I have exchange server 2010 configured and working without any errors the problem is in cisco router configuration when exchange server sends emails out the receives WAN IP not the public ip. I have configured RDNS lookups with our MX record IP addesses that match the FQDN but all our emails are rejected because it does not match with the public ip. Receiving mails problem is not an problem all mails are coming through. i am sure i am missing something on the router configuration that does not sends the public ip, can any one help me to solve this issue. Note; I've got 1 WAN IP & 8 Public IP from ISP . Find below the running configuration. Building configuration... Current configuration : 2734 bytes ! ! Last configuration change at 06:32:13 UTC Tue Apr 3 2012 ! NVRAM config last updated at 06:32:14 UTC Tue Apr 3 2012 ! NVRAM config last updated at 06:32:14 UTC Tue Apr 3 2012 version 15.1 service timestamps debug datetime msec service timestamps log datetime msec service password-encryption ! hostname BSBG-LL ! boot-start-marker boot-end-marker ! ! enable secret 5 $x$xHrxxxxx5ox0 enable password 7 xx23xx5FxxE1xx044 ! no aaa new-model ! no ipv6 cef ip source-route ip cef ! ! ! ! ! ip flow-cache timeout active 1 ip domain name yourdomain.com ip name-server 213.42.20.20 ip name-server 195.229.241.222 multilink bundle-name authenticated ! ! crypto pki token default removal timeout 0 ! ! license udi pid CISCO2911/K9 ! ! username bsbg ! ! ! ! ! ! interface Embedded-Service-Engine0/0 no ip address shutdown ! interface GigabitEthernet0/0 ip address 192.168.0.9 255.255.255.0 ip flow ingress ip nat inside ip virtual-reassembly in duplex auto speed 100 no cdp enable ! interface GigabitEthernet0/1 ip address 213.42.xx.x2 255.255.255.252 ip nat outside ip virtual-reassembly in duplex auto speed auto no cdp enable ! interface GigabitEthernet0/2 no ip address shutdown duplex auto speed auto ! ip forward-protocol nd ! no ip http server no ip http secure-server ! ip nat inside source list 120 interface GigabitEthernet0/1 overload ip nat inside source static tcp 192.168.0.4 25 94.56.89.100 25 extendable ip nat inside source static tcp 192.168.0.4 53 94.56.89.100 53 extendable ip nat inside source static udp 192.168.0.4 53 94.56.89.100 53 extendable ip nat inside source static tcp 192.168.0.4 110 94.56.89.100 110 extendable ip nat inside source static tcp 192.168.0.4 443 94.56.89.100 443 extendable ip nat inside source static tcp 192.168.0.4 587 94.56.89.100 587 extendable ip nat inside source static tcp 192.168.0.4 995 94.56.89.100 995 extendable ip nat inside source static tcp 192.168.0.4 3389 94.56.89.100 3389 extendable ip nat inside source static tcp 192.168.0.4 443 94.56.89.101 443 extendable ip nat inside source static tcp 192.168.0.12 80 94.56.89.102 80 extendable ip nat inside source static tcp 192.168.0.12 443 94.56.89.102 443 extendable ip nat inside source static tcp 192.168.0.12 3389 94.56.89.102 3389 extendable ip route 0.0.0.0 0.0.0.0 213.42.69.41 ! access-list 120 permit ip 192.168.0.0 0.0.0.255 any ! ! ! control-plane ! ! ! line con 0 exec-timeout 5 0 line aux 0 line 2 no activation-character no exec transport preferred none transport input all transport output pad telnet rlogin lapb-ta mop udptn v120 ssh stopbits 1 line vty 0 4 password 7 xx64xxD530D26086Dxx login transport input all ! scheduler allocate 20000 1000 end
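One way to make outbound mail from the Exchange server (192.168.0.4 in the configuration above) source from 94.56.89.100 instead of the WAN interface address is a one-to-one static NAT for that host, combined with excluding it from the overload ACL so PAT never translates it first. This is only a sketch based on the configuration shown, not a verified fix for this particular setup:

```
! 1) Keep the Exchange host out of the PAT/overload ACL (order matters):
no access-list 120
access-list 120 deny   ip host 192.168.0.4 any
access-list 120 permit ip 192.168.0.0 0.0.0.255 any
!
! 2) One-to-one static NAT so mail leaves sourced from the public IP that
!    matches the MX / PTR record. The existing per-port static entries for
!    192.168.0.4 overlap with this mapping and would need to be removed first.
ip nat inside source static 192.168.0.4 94.56.89.100
!
! 3) From exec mode, clear existing translations so the change takes effect:
!    clear ip nat translation *
```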

    Read the article

  • Managing BES Software Configurations

    - by DaveJohnston
    Hi, I am having problems with OTA deployment of a bespoke application that we have written. I have read loads of threads elsewhere and I have got mixed help, but for my particular case none of it has really helped. So I thought I would explain my exact situation and try and get some help here. I am running BES version 4.1.5 (Bundle 79) for Microsoft Exchange. The application we have written is split into 5 modules, which we control, and another 4 modules which are 3rd party libraries that we require. So for our modules the version numbers are regularly changing but for the others they are pretty much always going to remain the same. We have an alx file set up that identifies all of the files required and in fact I am able to create a software configuration and deploy the application with no problems. What I am trying to do however is maintain multiple versions of our application on the BES and be able to select which version I want to deploy to each user. I have tried this a number of ways (as I said I have read lots of other threads with solutions to this problem) but each seems to come with its own problem. First of all I tried just creating different configurations for each version of the application, but because they each had the same application ID the BES informed me that I couldn't do this. I read somewhere that the solution was to create a second shared folder (e.g. \Program Files\Common Files\RIM) and add the apploader stuff and the new version of the app to this folder. I could then create a second software configuration that would have the same application ID. The result of this seemed promising to start with. When I changed the config that was assigned to a user the new version was pushed out fine. But afterwards the BES reported that the device state was invalid, which meant I couldn't push anything else until I reactivated the device. I guess this is because the first config was never set to disallowed so the old version wasn't removed and the device essentially reported that it had multiple versions of the same application installed. The next suggestion I got was to change the application ID for each version, e.g. to include the version number. This meant that each version of the application could be included in a single configuration and I could set one to disallowed and the other to required. Initially this worked and the first version was deployed. But when I switched (i.e. the old version became disallowed and the new version required) the BES reported upgrade required and removed the old version. The device restarts and the old version is gone but the new version is not pushed out. I checked the BES and it still said Upgrade Required. I checked the log files and found: [40000] (11/12 09:50:27.397):{0xEB8} {[email protected], PIN=1234, UserId=2}SCS::PollDBQueueNewRequests - Queuing POLL_FOR_MISSING_APPS request [40000] (11/12 09:50:28.241):{0xE9C} RequestHandler::PollForMissingApps: Starting Poll For Missing Apps. [40304] (11/12 09:50:28.241):{0xE90} WorkerThreadPool:: ThreadProc(): Thread released with empty queue [40000] (11/12 09:50:28.241):{0xE9C} SCS::RemoveAppDeliveryRequests - No App Delivery Requests purged for User id 2 [30000] (11/12 09:50:28.960):{0xE9C} Discard duplicate module group "name" on device [30000] (11/12 09:50:28.960):{0xE9C} Discard duplicate module group "name" on device [40000] (11/12 09:50:29.163):{0xE9C} RequestHandler::PollForMissingApps: Completed Poll For Missing Apps, elapsed time 0.922 seconds. 
(You will notice I have removed actual names and email addresses etc for privacy reasons. But one question: where does the name of the module group come from? In my case it is close to the application ID but doesn't include the version number that I added at the end in order to get it to work. Is that information embedded in a COD file or something??) So it is reporting a duplicate module group on the device? What does this mean? I checked the device properties (as reported on the BES) and it confirms that the modules with the old version numbers are still present on the device. So the application has been removed but not the modules?? I checked the device and the modules are gone, so it is just the BES reporting that they are still there?? I checked the database and it has the modules in questions in the SyncDeviceMgmt table. If I delete these from the DB the BES changes to report Install Required, and low and behold the new version of the app is pushed out. So at the end of all that, my question is: does anyone have any other suggestions of how to handle upgrading our bespoke application OTA from the BES? Or can anyone point out something I am doing wrong in what I described above that might solve the problems I am having? I guess the question is why does the database maintain that the modules are on the device after they are removed? Thanks for any help you can provide.

    Read the article

  • Bluetooth RFCOMM / SDP connection to a RS232 adapter in android

    - by ThePosey
    Hello All, I am trying to use the Bluetooth Chat sample API app that google provides to connect to a bluetooth RS232 adapter hooked up to another device. Here is the app for reference: http://developer.android.com/resources/samples/BluetoothChat/index.html And here is the spec sheet for the RS232 connector just for reference: http://serialio.com/download/Docs/BlueSnap-guide-4.77_Commands.pdf Well the problem is that when I go to connect to the device with: mmSocket.connect(); (BluetoothSocket::connect()) I always get an IOException error thrown by the connect() method. When I do a toString on the exception I get "Service discovery failed". My question is mostly what are the cases that would cause an IOException to get thrown in the connect method? I know those are in the source somewhere but I don't know exactly how the java layer that you write apps in and the C/C++ layer that contains the actual stacks interface. I know that it uses the bluez bluetooth stack which is written in C/C++ but not sure how that ties into the java layer which is what I would think is throwing the exception. Any help on pointing me to where I can try to dissect this issue would be incredible. Also just to note I am able to pair with the RS232 adapter just fine but I am never able to actually connect. Here is the logcat output for more reference: I/ActivityManager( 1018): Displayed activity com.example.android.BluetoothChat/.DeviceListActivity: 326 ms (total 326 ms) E/BluetoothService.cpp( 1018): stopDiscoveryNative: D-Bus error in StopDiscovery: org.bluez.Error.Failed (Invalid discovery session) D/BluetoothChat( 1729): onActivityResult -1 D/BluetoothChatService( 1729): connect to: 00:06:66:03:0C:51 D/BluetoothChatService( 1729): setState() STATE_LISTEN - STATE_CONNECTING E/BluetoothChat( 1729): + ON RESUME + I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_CONNECTING I/BluetoothChatService( 1729): BEGIN mConnectThread E/BluetoothService.cpp( 1018): stopDiscoveryNative: D-Bus error in StopDiscovery: org.bluez.Error.Failed (Invalid discovery session) E/BluetoothEventLoop.cpp( 1018): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1498/hci0/dev_00_06_66_03_0C_51 I/BluetoothChatService( 1729): CONNECTION FAIL TOSTRING: java.io.IOException: Service discovery failed D/BluetoothChatService( 1729): setState() STATE_CONNECTING - STATE_LISTEN D/BluetoothChatService( 1729): start D/BluetoothChatService( 1729): setState() STATE_LISTEN - STATE_LISTEN I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_LISTEN V/BluetoothEventRedirector( 1080): Received android.bleutooth.device.action.UUID I/NotificationService( 1018): enqueueToast pkg=com.example.android.BluetoothChat callback=android.app.ITransientNotification$Stub$Proxy@446327c8 duration=0 I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_LISTEN E/BluetoothEventLoop.cpp( 1018): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1498/hci0/dev_00_06_66_03_0C_51 V/BluetoothEventRedirector( 1080): Received android.bleutooth.device.action.UUID The device I'm trying to connect to is the 00:06:66:03:0C:51 which I can scan for and apparently pair with just fine. The below is merged from a similar question which was successfully resolved by the selected answer here: How can one connect to an rfcomm device other than another phone in Android? The Android API provides examples of using listenUsingRfcommWithServiceRecord() to set up a socket and createRfcommSocketToServiceRecord() to connect to that socket. 
I'm trying to connect to an embedded device with a BlueSMiRF Gold chip. My working Python code (using the PyBluez library), which I'd like to port to Android, is as follows: sock = bluetooth.BluetoothSocket(proto=bluetooth.RFCOMM) sock.connect((device_addr, 1)) return sock.makefile() ...so the service to connect to is simply defined as channel 1, without any SDP lookup. As the only documented mechanism I see in the Android API does SDP lookup of a UUID, I'm slightly at a loss. Using "sdptool browse" from my Linux host comes up empty, so I surmise that the chip in question simply lacks SDP support.
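
    A couple of things commonly suggested for SPP/RS232 modules like this one (a hedged sketch, not code from the BluetoothChat sample): cancel any running discovery before connect(), try the well-known SPP UUID instead of a custom one, and fall back to opening RFCOMM channel 1 directly - mirroring the PyBluez call - through the hidden createRfcommSocket(int) method if the adapter really has no SDP record.

```java
// Hedged sketch (not part of the BluetoothChat sample): connect to an SPP/RS232
// adapter either via the well-known SPP UUID or, if the module has no SDP record,
// via the hidden createRfcommSocket(int channel) method.
import java.io.IOException;
import java.lang.reflect.Method;
import java.util.UUID;

import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothSocket;

public final class SppConnector {

    private static final UUID SPP_UUID =
            UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

    public static BluetoothSocket connect(BluetoothAdapter adapter, BluetoothDevice device)
            throws IOException {
        adapter.cancelDiscovery();  // a running scan is a frequent cause of "Service discovery failed"
        try {
            BluetoothSocket socket = device.createRfcommSocketToServiceRecord(SPP_UUID);
            socket.connect();
            return socket;
        } catch (IOException sdpLookupFailed) {
            // Fallback: open RFCOMM channel 1 directly, mirroring the PyBluez code above.
            try {
                Method m = device.getClass().getMethod("createRfcommSocket", int.class);
                BluetoothSocket socket = (BluetoothSocket) m.invoke(device, 1);
                socket.connect();
                return socket;
            } catch (Exception reflectionFailed) {
                throw new IOException("RFCOMM channel 1 fallback failed: " + reflectionFailed);
            }
        }
    }
}
```

    The usual BLUETOOTH and BLUETOOTH_ADMIN manifest permissions are assumed, as in the sample.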

    Read the article

  • ASP.NET 3.5/C# Menu Control in Master Page fails to use CSS styles

    - by Shaun
    I'm working on a web application that uses ASP.NET 3.5 and C#. Structurally, I have a master page with a menu control on it. The control serves as my navigation, and it gets its items from a SiteMapDataSource control and a corresponding Web.sitemap file. The problem is that some styles do not render properly when you specify the CssClass property. More specifically, the selected and hover styles don't respond to css styles. Consider the code below: <%@ Master Language="C#" AutoEventWireup="true" CodeFile="Site.master.cs" Inherits="Site" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.or/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>A webpage</title> </head> <body> <form id="form1" runat="server"> <div id="page"> <asp:Menu ID="navMenu" Orientation="Horizontal" StaticMenuStyle-CssClass="staticMenu" StaticMenuItemStyle-CssClass="staticMenuItem" StaticSelectedStyle-CssClass="staticSelectedItem" StaticHoverStyle-CssClass="staticHoverItem" runat="server"> </asp:Menu> <asp:SiteMapDataSource ID="srcSiteMap" runat="server" ShowStartingNode="false" /> <br /> <asp:ContentPlaceHolder id="ContentPlaceHolder1" runat="server"> </asp:ContentPlaceHolder> </div> </form> </body> </html> Suppose I had a corresponding .css file with the following: .staticMenuItem { background-color:Red; } .staticSelectedItem { background-color:Green; } .staticHoverItem { background-color:Blue; } What will happen is that my item backgrounds will properly be red, but my selected item will not be green and the item I'm hovering my mouse over will not be blue. This seems true regardless of whether or not I include the style in the head of the master page or in an external file in default theme as specified in the web.config file. If I specify the styles in the asp.net xml like so: <asp:Menu ID="navMenu" Orientation="Horizontal" runat="server"> <StaticSelectedStyle BackColor="Green" Font-Underline="True" Font-Bold="True" /> <StaticHoverStyle BackColor="Gray" /> </asp:Menu> It appears to work properly in Firefox, but the style is never embedded in the html in Internet Explorer. Odd. Does anybody have any insight into what is causing this problem and how to neatly work around it? I'm aware I might be able to programmically determine the current page and select the corresponding menu item manually so it receives the proper style class, but before I resort to hacking C# and Javascript together to fix this functionality, I'm open to ideas. Thanks!
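
    One workaround often suggested for the CssClass variant (a sketch, under the assumption that the Menu control's own generated styles are simply outranking the bare class selectors): raise the specificity by also targeting the anchors the control renders, and force the selected/hover colours.

```css
/* Hedged sketch: class names match the CssClass values in the markup above.
   The descendant "a" selectors and !important exist only to outrank whatever
   styles the Menu control emits for its rendered anchors. */
.staticMenuItem, .staticMenuItem a { background-color: Red; }
.staticSelectedItem, .staticSelectedItem a { background-color: Green !important; }
.staticHoverItem, .staticHoverItem a { background-color: Blue !important; }
```

    If the inline <StaticSelectedStyle>/<StaticHoverStyle> variant works everywhere except Internet Explorer, comparing the rendered HTML from both browsers for the generated class and style attributes usually shows which rule is being dropped.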

    Read the article

  • "Access is denied" JavaScript error when trying to access the document object of a programmatically-

    - by Bungle
    I have project in which I need to create an <iframe> element using JavaScript and append it to the DOM. After that, I need to insert some content into the <iframe>. It's a widget that will be embedded in third-party websites. I don't set the "src" attribute of the <iframe> since I don't want to load a page; rather, it is used to isolate/sandbox the content that I insert into it so that I don't run into CSS or JavaScript conflicts with the parent page. I'm using JSONP to load some HTML content from a server and insert it in this <iframe>. I have this working fine, with one serious exception - if the document.domain property is set in the parent page (which it may be in certain environments in which this widget is deployed), Internet Explorer (probably all versions, but I've confirmed in 6, 7, and 8) gives me an "Access is denied" error when I try to access the document object of this <iframe> I've created. It doesn't happen in any other browsers I've tested in (all major modern ones). This makes some sense, since I'm aware that Internet Explorer requires you to set the document.domain of all windows/frames that will communicate with each other to the same value. However, I'm not aware of any way to set this value on a document that I can't access. Is anyone aware of a way to do this - somehow set the document.domain property of this dynamically created <iframe>? Or am I not looking at it from the right angle - is there another way to achieve what I'm going for without running into this problem? I do need to use an <iframe> in any case, as the isolated/sandboxed window is crucial to the functionality of this widget. Here's my test code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> <title>Document.domain Test</title> <script type="text/javascript"> document.domain = 'onespot.com'; // set the page's document.domain </script> </head> <body> <p>This is a paragraph above the &lt;iframe&gt;.</p> <div id="placeholder"></div> <p>This is a paragraph below the &lt;iframe&gt;.</p> <script type="text/javascript"> var iframe = document.createElement('iframe'), doc; // create <iframe> element document.getElementById('placeholder').appendChild(iframe); // append <iframe> element to the placeholder element setTimeout(function() { // set a timeout to give browsers a chance to recognize the <iframe> doc = iframe.contentWindow || iframe.contentDocument; // get a handle on the <iframe> document alert(doc); if (doc.document) { // HEREIN LIES THE PROBLEM doc = doc.document; } doc.body.innerHTML = '<h1>Hello!</h1>'; // add an element }, 10); </script> </body> </html> I've hosted it at: http://troy.onespot.com/static/access_denied.html As you'll see if you load this page in IE, at the point that I call alert(), I do have a handle on the window object of the <iframe>; I just can't get any deeper, into its document object. Thanks very much for any help or suggestions! I'll be indebted to whomever can help me find a solution to this.
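
    The workaround usually offered for this exact IE behaviour (a hedged sketch; 'onespot.com' must match whatever the parent page set) is to give the dynamically created <iframe> a javascript: src that sets document.domain from inside the frame, so both documents agree before the parent touches the frame's DOM.

```javascript
// Hedged sketch based on the test page above.
var iframe = document.createElement('iframe');
iframe.src = "javascript:void((function() {" +
             "document.open();" +
             "document.domain = 'onespot.com';" +  // align the frame with the parent page
             "document.close();" +
             "})())";
document.getElementById('placeholder').appendChild(iframe);

setTimeout(function () {
    // IE now treats parent and frame as same-origin, so this no longer throws.
    var doc = iframe.contentWindow.document;
    doc.body.innerHTML = '<h1>Hello!</h1>';
}, 10);
```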

    Read the article

  • Errors arise when a NetBeans Maven project tries to run

    - by Zakaria
    Hi everybody, I installed NetBeans 6.8 on Vista and and I'm trying to execute a simple Maven Project. When I ran the project, I got this set of errors: WARNING: You are running embedded Maven builds, some build may fail due to incompatibilities with latest Maven release. To set Maven instance to use for building, click here. Scanning for projects... [#process-resources] [resources:resources] Using default encoding to copy filtered resources. [#compile] [compiler:compile] Nothing to compile - all classes are up to date [exec:exec] [EL Info]: 2010-04-04 18:22:54.907--ServerSession(15532856)--EclipseLink, version: Eclipse Persistence Services - 2.0.0.v20091127-r5931 [EL Severe]: 2010-04-04 18:22:54.929--ServerSession(15532856)--Local Exception Stack: Exception [EclipseLink-4003] (Eclipse Persistence Services - 2.0.0.v20091127-r5931): org.eclipse.persistence.exceptions.DatabaseException Exception in thread "main" javax.persistence.PersistenceException: Exception [EclipseLink-4003] (Eclipse Persistence Services - 2.0.0.v20091127-r5931): org.eclipse.persistence.exceptions.DatabaseException Exception Description: Configuration error. Class [org.apache.derby.jdbc.ClientDriver] not found. Exception Description: Configuration error. Class [org.apache.derby.jdbc.ClientDriver] not found. at org.eclipse.persistence.exceptions.DatabaseException.configurationErrorClassNotFound(DatabaseException.java:82) at org.eclipse.persistence.sessions.DefaultConnector.loadDriverClass(DefaultConnector.java:267) at org.eclipse.persistence.sessions.DefaultConnector.connect(DefaultConnector.java:85) at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:392) at org.eclipse.persistence.sessions.DatasourceLogin.connectToDatasource(DatasourceLogin.java:162) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.getServerSession(EntityManagerFactoryImpl.java:151) at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.loginAndDetectDatasource(DatabaseSessionImpl.java:584) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManagerImpl(EntityManagerFactoryImpl.java:207) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryProvider.login(EntityManagerFactoryProvider.java:228) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:195) at com.mycompany.chapter2_ex1.Main.main(Main.java:31) Caused by: Exception [EclipseLink-4003] (Eclipse Persistence Services - 2.0.0.v20091127-r5931): org.eclipse.persistence.exceptions.DatabaseException Exception Description: Configuration error. Class [org.apache.derby.jdbc.ClientDriver] not found. 
at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:368) at org.eclipse.persistence.exceptions.DatabaseException.configurationErrorClassNotFound(DatabaseException.java:82) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.getServerSession(EntityManagerFactoryImpl.java:151) at org.eclipse.persistence.sessions.DefaultConnector.loadDriverClass(DefaultConnector.java:267) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManagerImpl(EntityManagerFactoryImpl.java:207) at org.eclipse.persistence.sessions.DefaultConnector.connect(DefaultConnector.java:85) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryImpl.createEntityManager(EntityManagerFactoryImpl.java:195) at org.eclipse.persistence.sessions.DatasourceLogin.connectToDatasource(DatasourceLogin.java:162) at com.mycompany.chapter2_ex1.Main.main(Main.java:31) at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.loginAndDetectDatasource(DatabaseSessionImpl.java:584) at org.eclipse.persistence.internal.jpa.EntityManagerFactoryProvider.login(EntityManagerFactoryProvider.java:228) at org.eclipse.persistence.internal.jpa.EntityManagerSetupImpl.deploy(EntityManagerSetupImpl.java:368) ... 4 more [ERROR]The following mojo encountered an error while executing: [ERROR]Group-Id: org.codehaus.mojo [ERROR]Artifact-Id: exec-maven-plugin [ERROR]Version: 1.1.1 [ERROR]Mojo: exec [ERROR]brought in via: Direct invocation [ERROR]While building project: [ERROR]Group-Id: com.mycompany [ERROR]Artifact-Id: chapter2_ex1 [ERROR]Version: 1.0-SNAPSHOT [ERROR]From file: C:\Users\Charlotte\Documents\NetBeansProjects\chapter2_ex1\pom.xml [ERROR]Reason: Result of cmd.exe /X /C ""C:\Program Files\Java\jdk1.6.0_11\bin\java.exe" -classpath C:\Users\Charlotte\Documents\NetBeansProjects\chapter2_ex1\target\classes;C:\Users\Charlotte\.m2\repository\javax\persistence\persistence-api\1.0\persistence-api-1.0.jar;C:\Users\Charlotte\.m2\repository\org\eclipse\persistence\javax.persistence\2.0.0\javax.persistence-2.0.0.jar;C:\Users\Charlotte\.m2\repository\org\eclipse\persistence\eclipselink\2.0.0-RC1\eclipselink-2.0.0-RC1.jar com.mycompany.chapter2_ex1.Main" execution is: '1'. ------------------------------------------------------------------------ For more information, run with the -e flag ------------------------------------------------------------------------ BUILD FAILED ------------------------------------------------------------------------ Total time: 3 seconds Finished at: Sun Apr 04 18:22:55 CEST 2010 Final Memory: 47M/94M ------------------------------------------------------------------------ Theses exceptions rose even if I can run the database by using the console (ij) and when I connect the Database, no errors are showing. Can you help me please? Thank you very much. Regards.
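
    The root cause in the stack trace is simply that org.apache.derby.jdbc.ClientDriver is not on the runtime classpath of the exec:exec command line. A hedged sketch of the dependency that normally provides it (the version number is an assumption - use whichever matches the Derby server you start with ij):

```xml
<!-- add to the <dependencies> section of pom.xml -->
<dependency>
    <groupId>org.apache.derby</groupId>
    <artifactId>derbyclient</artifactId>
    <version>10.5.3.0_1</version>
</dependency>
```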

    Read the article

  • How to create an array of User Objects in PowerBuilder?

    - by TomatoSandwich
    The application has many different windows. One is a single 'row' window, which relates to a single row of data in a table, say 'Order'. Another is a 'multiple row' datawindow, where each row in the datawindow relates to a row in 'Order', used for spreadsheet-like data entry Functionality extentions have create a detail table, say 'Suppliers', where an order may require multiple suppliers to fill the order. Normally, suppliers are not required, because they are already in the warehouse (0), or there may need to be an order to a supplier to complete an order (1), or multiple suppliers may need to be contacted (more than one). As a single order is entered, once the items are entered, a User Object is populated depending on the status of the items in the warehouse. If required, this creates a 1-to-many relationship between the order and the "backorder". In the PB side, there is a single object uo_backorder which is created on the window, and is referenced by the window depending on the command (button popup, save, etc) I have been tasked to create the 'backorder' functionality on the spreadsheet-line window. Previously the default options for backorders were used when orders were created from the multiple-row window. A workaround already exists where unconfirmed orders could be opened in the single-row window, and the backorder information manipulated there. However, the userbase wants this functionality on the one window. Since the functionality of uo_backorder already exists, I assumed I could just copy the code from the single-order window, but create an array of uo_backorder objects to cope with multiple rows. I tried the following: forward .. type uo_backorder from popupdwpb within w_order_conv end type end forward global type w_order_conv from singleform .. uo_backorder uo_backorder end type type variables .. uo_backorder iuo_backorders[] end variables .. public function boolean iuo_backorders(); .. long ll_count ll_count = UpperBound(iuo_backorders[]) iuo_backorders[ll_count+1] = uo_backorder //THIS ISN'T RIGHT lb_ok = iuo_backorders[ll_count+1].init('w_backorder_popup', '', '', '', 'd_backorder_popup', sqlca, useTransObj()) return lb_ok end function .. <utility functions> .. type uo_backorder from popupdwpb within w_order_conv integer x = 28 integer y = 28 integer width ... end type on uo_backorder.destroy call popupdwpb::destroy end on The issue I face now is that the code commented "THIS ISN'T RIGHT" isn't correct. It is associating the visual object placed on the face of the main window to each array cell, so anytime I reference the array cell object it's actually referencing the one original object, not the new instances that I (thought) I was creating. If I change the code iuo_backorders[ll_count+1] = create uo_backorder the code doesn't run, saying that it failed to initalize the popup window. I think this is related to the class being called the same thing as the instance. What I want to end up with is an array of uo_backorder objects that I can associate to each row of my datawindow (first row = first cell, etc). I think the issue lays in the fact it's a visual object, and I can't seem to get the window to run without adding a dummy object on the face of the window (functionality from the original single-row window). Since it's a VISUAL object, does the object indeed need to be embedded on the windowface for the window to know what object I'm talking about? If so, how does one create multiple windowface objects (one to many, depending on when a row is added)? 
    Don't hesitate to ask if there is any more information you need from me. I have no idea what is 'standard' or 'default' in PB, or what is custom and needs more explaining.
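
    A hedged PowerScript sketch of the direction this usually takes (names follow the question; the x/y placement and the visible toggle are assumptions): visual user objects that must live on a window are created with the window's OpenUserObject() rather than a plain create, one instance per datawindow row.

```powerscript
// Hedged sketch: create one uo_backorder instance per row on the window itself,
// instead of pointing every array element at the single design-time control.
long ll_count
boolean lb_ok

ll_count = UpperBound(iuo_backorders[])

// OpenUserObject instantiates the class and places it on this window
this.OpenUserObject(iuo_backorders[ll_count + 1], "uo_backorder", 0, 0)
iuo_backorders[ll_count + 1].visible = false  // keep it off-screen if it only carries logic

lb_ok = iuo_backorders[ll_count + 1].init('w_backorder_popup', '', '', '', &
        'd_backorder_popup', sqlca, useTransObj())
```

    The row-to-instance association can then live in the array index (row 1 maps to iuo_backorders[1]), and CloseUserObject() releases an instance when its row goes away.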

    Read the article

  • How to pre-load all deployed assemblies for an AppDomain

    - by Andras Zoltan
    Given an App Domain, there are many different locations that Fusion (the .Net assembly loader) will probe for a given assembly. Obviously, we take this functionality for granted and, since the probing appears to be embedded within the .Net runtime (Assembly._nLoad internal method seems to be the entry-point when Reflect-Loading - and I assume that implicit loading is probably covered by the same underlying algorithm), as developers we don't seem to be able to gain access to those search paths. My problem is that I have a component that does a lot of dynamic type resolution, and which needs to be able to ensure that all user-deployed assemblies for a given AppDomain are pre-loaded before it starts its work. Yes, it slows down startup - but the benefits we get from this component totally outweight this. The basic loading algorithm I've already written is as follows. It deep-scans a set of folders for any .dll (.exes are being excluded at the moment), and uses Assembly.LoadFrom to load the dll if it's AssemblyName cannot be found in the set of assemblies already loaded into the AppDomain (this is implemented inefficiently, but it can be optimized later): void PreLoad(IEnumerable<string> paths) { foreach(path p in paths) { PreLoad(p); } } void PreLoad(string p) { //all try/catch blocks are elided for brevity string[] files = null; files = Directory.GetFiles(p, "*.dll", SearchOption.AllDirectories); AssemblyName a = null; foreach (var s in files) { a = AssemblyName.GetAssemblyName(s); if (!AppDomain.CurrentDomain.GetAssemblies().Any( assembly => AssemblyName.ReferenceMatchesDefinition( assembly.GetName(), a))) Assembly.LoadFrom(s); } } LoadFrom is used because I've found that using Load() can lead to duplicate assemblies being loaded by Fusion if, when it probes for it, it doesn't find one loaded from where it expects to find it. So, with this in place, all I now have to do is to get a list in precedence order (highest to lowest) of the search paths that Fusion is going to be using when it searches for an assembly. Then I can simply iterate through them. The GAC is irrelevant for this, and I'm not interested in any environment-driven fixed paths that Fusion might use - only those paths that can be gleaned from the AppDomain which contain assemblies expressly deployed for the app. My first iteration of this simply used AppDomain.BaseDirectory. This works for services, form apps and console apps. It doesn't work for an Asp.Net website, however, since there are at least two main locations - the AppDomain.DynamicDirectory (where Asp.Net places it's dynamically generated page classes and any assemblies that the Aspx page code references), and then the site's Bin folder - which can be discovered from the AppDomain.SetupInformation.PrivateBinPath property. So I now have working code for the most basic types of apps now (Sql Server-hosted AppDomains are another story since the filesystem is virtualised) - but I came across an interesting issue a couple of days ago where this code simply doesn't work: the nUnit test runner. This uses both Shadow Copying (so my algorithm would need to be discovering and loading them from the shadow-copy drop folder, not from the bin folder) and it sets up the PrivateBinPath as being relative to the base directory. And of course there are loads of other hosting scenarios that I probably haven't considered; but which must be valid because otherwise Fusion would choke on loading the assemblies. 
I want to stop feeling around and introducing hack upon hack to accommodate these new scenarios as they crop up - what I want is, given an AppDomain and its setup information, the ability to produce this list of Folders that I should scan in order to pick up all the DLLs that are going to be loaded; regardless of how the AppDomain is setup. If Fusion can see them as all the same, then so should my code. Of course, I might have to alter the algorithm if .Net changes its internals - that's just a cross I'll have to bear. Equally, I'm happy to consider SQL Server and any other similar environments as edge-cases that remain unsupported for now. Any ideas!?
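
    Not a general answer, but a hedged sketch of deriving the candidate folders from the AppDomainSetup instead of hard-coding BaseDirectory - it covers the ASP.NET bin folder, relative PrivateBinPath entries like nUnit's, and the shadow-copy cache in one place (GAC and machine-wide locations are deliberately ignored):

```csharp
using System;
using System.Collections.Generic;
using System.IO;

static class ProbingPaths
{
    // Hedged sketch: gathers the folders this AppDomain is configured to probe.
    // Shadow-copy handling is approximate - CachePath contains per-assembly subfolders.
    public static IEnumerable<string> GetProbingFolders(AppDomain domain)
    {
        var setup = domain.SetupInformation;
        var folders = new List<string> { setup.ApplicationBase };

        if (!string.IsNullOrEmpty(setup.PrivateBinPath))
        {
            foreach (var entry in setup.PrivateBinPath.Split(
                         new[] { ';' }, StringSplitOptions.RemoveEmptyEntries))
            {
                // PrivateBinPath entries may be relative to ApplicationBase (nUnit-style hosts).
                folders.Add(Path.Combine(setup.ApplicationBase, entry));
            }
        }

        if (!string.IsNullOrEmpty(domain.DynamicDirectory))
            folders.Add(domain.DynamicDirectory);      // ASP.NET's generated assemblies

        if (string.Equals(setup.ShadowCopyFiles, "true", StringComparison.OrdinalIgnoreCase)
            && !string.IsNullOrEmpty(setup.CachePath))
            folders.Add(setup.CachePath);              // shadow-copy drop area

        foreach (var folder in folders)
            if (!string.IsNullOrEmpty(folder) && Directory.Exists(folder))
                yield return folder;
    }
}
```

    Because PreLoad() already deep-scans with SearchOption.AllDirectories, passing CachePath through is enough to reach the per-assembly subfolders a shadow-copying host actually writes to.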

    Read the article

  • IE attachEvent on object tag causes memory corruption

    - by larswa
    I've an ActiveX Control within an embedded IE8 HTML page that has the following event MessageReceived([in] BSTR srcWindowId, [in] BSTR json). On Windows the event is registered with OCX.attachEvent("MessageReceived", onMessageReceivedFunc). Following code fires the event in the HTML page. HRESULT Fire_MessageReceived(BSTR id, BSTR json) { CComVariant varResult; T* pT = static_cast<T*>(this); int nConnectionIndex; CComVariant* pvars = new CComVariant[2]; int nConnections = m_vec.GetSize(); for (nConnectionIndex = 0; nConnectionIndex < nConnections; nConnectionIndex++) { pT->Lock(); CComPtr<IUnknown> sp = m_vec.GetAt(nConnectionIndex); pT->Unlock(); IDispatch* pDispatch = reinterpret_cast<IDispatch*>(sp.p); if (pDispatch != NULL) { VariantClear(&varResult); pvars[1] = id; pvars[0] = json; DISPPARAMS disp = { pvars, NULL, 2, 0 }; pDispatch->Invoke(0x1, IID_NULL, LOCALE_USER_DEFAULT, DISPATCH_METHOD, &disp, &varResult, NULL, NULL); } } delete[] pvars; // -> Memory Corruption here! return varResult.scode; } After I enabled gflags.exe with application verifier, the following strange behaviour occur: After Invoke() that is executing the JavaScript callback, the BSTR from pvars[1] is copied to pvars[0] for some unknown reason!? The delete[] of pvars causes a double free of the same string then which ends in a heap corruption. Does anybody has an idea whats going on here? Is this a IE bug or is there a trick within the OCX Implementation that I'm missing? If I use the tag like: <script for="OCX" event="MessageReceived(id, json)" language="JavaScript" type="text/javascript"> window.onMessageReceivedFunc(windowId, json); </script> ... the strange copy operation does not occur. The following code also seem to be ok due to the fact that the caller of Fire_MessageReceived() is responsible for freeing the BSTRs. HRESULT Fire_MessageReceived(BSTR srcWindowId, BSTR json) { CComVariant varResult; T* pT = static_cast<T*>(this); int nConnectionIndex; VARIANT pvars[2]; int nConnections = m_vec.GetSize(); for (nConnectionIndex = 0; nConnectionIndex < nConnections; nConnectionIndex++) { pT->Lock(); CComPtr<IUnknown> sp = m_vec.GetAt(nConnectionIndex); pT->Unlock(); IDispatch* pDispatch = reinterpret_cast<IDispatch*>(sp.p); if (pDispatch != NULL) { VariantClear(&varResult); pvars[1].vt = VT_BSTR; pvars[1].bstrVal = srcWindowId; pvars[0].vt = VT_BSTR; pvars[0].bstrVal = json; DISPPARAMS disp = { pvars, NULL, 2, 0 }; pDispatch->Invoke(0x1, IID_NULL, LOCALE_USER_DEFAULT, DISPATCH_METHOD, &disp, &varResult, NULL, NULL); } } delete[] pvars; return varResult.scode; } Thanks!
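
    Without knowing what IE does internally here, one hedged variation worth trying is to stop sharing a single heap-allocated pvars array across all connection points and let a stack-based CComVariant pair own fresh string copies per Invoke - if the corruption goes away, something in the Invoke path was mutating the shared VARIANTARGs:

```cpp
// Hedged variation of the first Fire_MessageReceived above: every Invoke gets its
// own CComVariant pair on the stack, so each call owns (and later frees) its own
// copies and nothing is shared between connection points or iterations.
HRESULT Fire_MessageReceived(BSTR id, BSTR json)
{
    CComVariant varResult;
    T* pT = static_cast<T*>(this);
    int nConnections = m_vec.GetSize();

    for (int i = 0; i < nConnections; ++i)
    {
        pT->Lock();
        CComPtr<IUnknown> sp = m_vec.GetAt(i);
        pT->Unlock();

        IDispatch* pDispatch = reinterpret_cast<IDispatch*>(sp.p);
        if (pDispatch != NULL)
        {
            CComVariant args[2];
            args[1] = id;    // CComVariant copies the string here (SysAllocString under the hood)
            args[0] = json;

            DISPPARAMS disp = { args, NULL, 2, 0 };
            varResult.Clear();
            pDispatch->Invoke(0x1, IID_NULL, LOCALE_USER_DEFAULT, DISPATCH_METHOD,
                              &disp, &varResult, NULL, NULL);
            // args are destroyed here; their destructors release the copies,
            // so there is no delete[] and no chance of a double free.
        }
    }
    return varResult.scode;
}
```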

    Read the article

  • NHibernate unmapped class exception

    - by John Prideaux
    I am trying to implement a one-to-many relationship using NHibernate 2.1.2 but keep getting "Association references unmapped class" exceptions. I have verified that my hbm.xml files are embedded resource. Here are my classes and mappings. Any ideas? public class OrderStatus { public virtual decimal MainCommit { get; set; } public virtual decimal CommitNumber { get; set; } public virtual string InvoiceNumber { get; set; } public virtual string ShipTo { get; set; } public virtual string CustomerOrderNumber { get; set; } public virtual string Station { get; set; } public virtual DateTime RequestedShipDate { get; set; } public virtual decimal EstimatedValue { get; set; } public virtual decimal EstimatedWeight { get; set; } public virtual string Customer { get; set; } public virtual DateTime InvoiceDate { get; set; } public virtual ICollection<Promise> Promises { get; set; } } <class name="AladdinDb.Models.OrderStatus, AladdinDb" table="vorder_status"> <id name="CommitNumber" type="decimal" column="commit_no"> <generator class="assigned"> <param name="property"> Plan </param> </generator> </id> <property name="MainCommit" column="main_commit" type="decimal" /> <property name="InvoiceNumber" column="invoice_no" type="string" /> <property name="ShipTo" column="ship_to" type ="string"/> <property name="CustomerOrderNumber" column="cust_order_no" type="string" /> <property name="Station" column="station" type="string" /> <property name="RequestedShipDate" column="req_ship_date" type="DateTime" /> <property name="EstimatedValue" column="estimated_value" type="decimal"/> <property name="EstimatedWeight" column="estimated_weight" type="decimal" /> <property name="Customer" column="customer" type="string" /> <property name="InvoiceDate" column="invoice_date" /> <set name="Promises"> <key column="commit_no"></key> <one-to-many class="Promise" /> </set> </class> public class Promise { public virtual decimal CommitNumber { get; set; } public virtual DateTime PromiseDate { get; set; } public virtual string WhoAsked { get; set; } public virtual string WhoGave { get; set; } public virtual string Iffy { get; set; } } <class name="AladdinDb.Models.Promise, AladdinDb" table="promise"> <id name="CommitNumber" type="decimal" column="commit_no"> <generator class="assigned" /> </id> <property name="PromiseDate" column="promise_date" /> <property name="WhoAsked" column="who_asked" /> <property name="WhoGave" column="who_gave" /> <property name="Iffy" column="iffy" /> </class>
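
    One detail stands out (hedged - the mappings can't be run here): the &lt;one-to-many&gt; only says class="Promise", while the &lt;class&gt; elements use assembly-qualified names and the mapping root apparently declares no default namespace/assembly, so NHibernate has nothing to resolve the short name against. A sketch of the two usual fixes:

```xml
<!-- Option 1: qualify the association the same way the classes are declared -->
<set name="Promises">
  <key column="commit_no" />
  <one-to-many class="AladdinDb.Models.Promise, AladdinDb" />
</set>

<!-- Option 2: declare defaults once on the mapping root, then short names work -->
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                   namespace="AladdinDb.Models"
                   assembly="AladdinDb">
  ...
</hibernate-mapping>
```

    Either form should clear the "Association references unmapped class" exception, assuming both hbm.xml files really are embedded resources in the AladdinDb assembly.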

    Read the article

  • Is it possible to make mt.exe embed manifest files correctly in Visual Studio 2008?

    - by Sorin Sbarnea
    I found that mt.exe fails to correctly create and embed manifest files into executables when run inside a VCPROJ. For example the same executable load well on Windows 7 but failed to load on Windows XP. The manifest was embedded and correct. After I spend lots of hours searching for possible reasons and solution I modified the project settings to generate the manifest outside the exe file. Now it works on both systems. Here are the examples for debug builds. With embed disabled: <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3"> <security> <requestedPrivileges> <requestedExecutionLevel level="asInvoker" uiAccess="false"></requestedExecutionLevel> </requestedPrivileges> </security> </trustInfo> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.DebugCRT" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity> </dependentAssembly> </dependency> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.DebugMFC" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity> </dependentAssembly> </dependency> </assembly> This is with embed enabled: <?xml version="1.0" encoding="UTF-8" standalone="yes" ?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3"> <security> <requestedPrivileges> <requestedExecutionLevel level="asInvoker" uiAccess="false" /> </requestedPrivileges> </security> </trustInfo> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.DebugCRT" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b" /> </dependentAssembly> </dependency> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.DebugMFC" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b" /> </dependentAssembly> </dependency> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.Windows.Common-Controls" version="6.0.0.0" processorArchitecture="x86" publicKeyToken="6595b64144ccf1df" language="*" /> </dependentAssembly> </dependency> </assembly> If you compare them the second one adds common controls (I don't know from where) and also it is a small difference with the syntax of requestedExecutionLevel tag.
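
    When the automatic embedding keeps misbehaving, a workaround people fall back on (hedged; the macros below are the stock VC++ ones) is to leave "Embed Manifest" off so the .manifest is written next to the executable, then merge it in yourself as a post-build event:

```bat
mt.exe -manifest "$(TargetPath).manifest" -outputresource:"$(TargetPath)";#1
```

    That at least makes the embedded manifest byte-for-byte the same as the external one that already works on XP. The extra Common-Controls dependency in the embedded variant typically comes from another manifest fragment being merged in - for example a #pragma comment(linker, "/manifestdependency:...") in a header or the project's "Additional Manifest Files" setting - which is worth hunting down separately.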

    Read the article

  • Maven changelog plugin with Mercurial problem

    - by doom2.wad
    I have configured my Maven2 project to generate a changelog report from a Mercurial repository (accessible via file:// protocol) but the goal execution fails with the following message: + Error stacktraces are turned on. [INFO] Scanning for projects... [INFO] Searching repository for plugin with prefix: 'changelog'. [INFO] ------------------------------------------------------------------------ [INFO] Building Phobos3 Prototype [INFO] task-segment: [changelog:changelog] [INFO] ------------------------------------------------------------------------ [INFO] [changelog:changelog {execution: default-cli}] [INFO] Generating changed sets xml to: D:\Documents and Settings\501845922\Workspace\phobos3.prototype\target\changelog.xml [INFO] EXECUTING: hg log --verbose [WARNING] Could not figure out: abort: Invalid argument [ERROR] EXECUTION FAILED Execution of cmd : log failed with exit code: -1. Working directory was: D:\Documents and Settings\501845922\Workspace\phobos3.prototype Your Hg installation seems to be valid and complete. Hg version: 1.4.3+20100201 (OK) [ERROR] Provider message: [ERROR] EXECUTION FAILED Execution of cmd : log failed with exit code: -1. Working directory was: D:\Documents and Settings\501845922\Workspace\phobos3.prototype Your Hg installation seems to be valid and complete. Hg version: 1.4.3+20100201 (OK) [ERROR] Command output: [ERROR] [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] An error has occurred in Change Log report generation. Embedded error: An error has occurred during changelog command : Command failed. [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.lifecycle.LifecycleExecutionException: An error has occurred in Change Log report generation. at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:719) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.plugin.MojoExecutionException: An error has occurred in Change Log report generation. 
at org.apache.maven.reporting.AbstractMavenReport.execute(AbstractMavenReport.java:79) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) ... 17 more Caused by: org.apache.maven.reporting.MavenReportException: An error has occurred during changelog command : at org.apache.maven.plugin.changelog.ChangeLogReport.generateChangeSetsFromSCM(ChangeLogReport.java:555) at org.apache.maven.plugin.changelog.ChangeLogReport.getChangedSets(ChangeLogReport.java:393) at org.apache.maven.plugin.changelog.ChangeLogReport.executeReport(ChangeLogReport.java:340) at org.apache.maven.reporting.AbstractMavenReport.generate(AbstractMavenReport.java:98) at org.apache.maven.reporting.AbstractMavenReport.execute(AbstractMavenReport.java:73) ... 19 more Caused by: org.apache.maven.plugin.MojoExecutionException: Command failed. at org.apache.maven.plugin.changelog.ChangeLogReport.checkResult(ChangeLogReport.java:705) at org.apache.maven.plugin.changelog.ChangeLogReport.generateChangeSetsFromSCM(ChangeLogReport.java:467) ... 23 more [INFO] ------------------------------------------------------------------------ [INFO] Total time: 3 seconds [INFO] Finished at: Thu Apr 29 17:10:06 CEST 2010 [INFO] Final Memory: 5M/10M [INFO] ------------------------------------------------------------------------ What did I miss in a configuration? (I hope it is a configuration problem not a Maven plugins related bug!:) My repository URL seems to be ok (the plugin has been complaining before, I fixed that up), I also set a date format for parsing (also been complaining, also fixed). target/changelog.xml being promised was not generated at all. Maven 2.2.1 Mercurial 1.4.3 Windows XP SP3 mvn scm:changelog command provides an expected output. Thanks for any suggestions, I haven't googled up anything (nor binged up;).

    Read the article

  • How to configure Spring Security PasswordComparisonAuthenticator

    - by denlab
    I can bind to an embedded ldap server on my local machine with the following bean: <b:bean id="secondLdapProvider" class="org.springframework.security.ldap.authentication.LdapAuthenticationProvider"> <b:constructor-arg> <b:bean class="org.springframework.security.ldap.authentication.BindAuthenticator"> <b:constructor-arg ref="contextSource" /> <b:property name="userSearch"> <b:bean id="userSearch" class="org.springframework.security.ldap.search.FilterBasedLdapUserSearch"> <b:constructor-arg index="0" value="ou=people"/> <b:constructor-arg index="1" value="(uid={0})"/> <b:constructor-arg index="2" ref="contextSource" /> </b:bean> </b:property> </b:bean> </b:constructor-arg> <b:constructor-arg> <b:bean class="com.company.security.ldap.BookinLdapAuthoritiesPopulator"> </b:bean> </b:constructor-arg> </b:bean> however, when I try to authenticate with a PasswordComparisonAuthenticator it repeatedly fails on a bad credentials event: <b:bean id="ldapAuthProvider" class="org.springframework.security.ldap.authentication.LdapAuthenticationProvider"> <b:constructor-arg> <b:bean class="org.springframework.security.ldap.authentication.PasswordComparisonAuthenticator"> <b:constructor-arg ref="contextSource" /> <b:property name="userDnPatterns"> <b:list> <b:value>uid={0},ou=people</b:value> </b:list> </b:property> </b:bean> </b:constructor-arg> <b:constructor-arg> <b:bean class="com.company.security.ldap.BookinLdapAuthoritiesPopulator"> </b:bean> </b:constructor-arg> </b:bean> Through debugging, I can see that the authenticate method picks up the DN from the ldif file, but then tries to compare the passwords, however, it's using the LdapShaPasswordEncoder (the default one) where the password is stored in plaintext in the file, and this is where the authentication fails. Here's the authentication manager bean referencing the preferred authentication bean: <authentication-manager> <authentication-provider ref="ldapAuthProvider"/> <authentication-provider user-service-ref="userDetailsService"> <password-encoder hash="md5" base64="true"> <salt-source system-wide="secret"/> </password-encoder> </authentication-provider> </authentication-manager> On a side note, whether I set the password-encoder on ldapAuthProvider to plaintext or just leave it blank, doesn't seem to make a difference. Any help would be greatly appreciated. Thanks
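
    Two settings are worth wiring in explicitly (a hedged sketch - the attribute name assumes the password sits in the standard userPassword attribute of the ldif entries): PasswordComparisonAuthenticator compares against whatever passwordEncoder it is given, and the default LdapShaPasswordEncoder will not match a plain-text value, so point it at the attribute and a plain-text encoder:

```xml
<b:bean class="org.springframework.security.ldap.authentication.PasswordComparisonAuthenticator">
  <b:constructor-arg ref="contextSource" />
  <b:property name="userDnPatterns">
    <b:list>
      <b:value>uid={0},ou=people</b:value>
    </b:list>
  </b:property>
  <!-- attribute holding the password in the directory entries (assumption: userPassword) -->
  <b:property name="passwordAttributeName" value="userPassword" />
  <!-- compare as plain text instead of the default LdapShaPasswordEncoder -->
  <b:property name="passwordEncoder">
    <b:bean class="org.springframework.security.authentication.encoding.PlaintextPasswordEncoder" />
  </b:property>
</b:bean>
```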

    Read the article

  • C#: exporting SWF object as image to Word

    - by Lynn
    Hello in my Asp.net web page (C# on backend) I use a Repeater, whose items consist of a title and a Flex chart (embedded .swf file). I am trying to export the contents of the Repeater to a Word document. My problem is to convert the SWF files into images and pass it on to the Word document. The swf object has a public function which returns a byteArray representation of itself (public function grabScreen():ByteArray), but I do not know how to call it directly from c#. I have access to the mxml files, so I can make modifications to the swf files, if needed. The code is shown below, and your help is appreciated :) .aspx <asp:Button ID="Button1" runat="server" text="export to Word" onclick="print2"/> <asp:Repeater ID="rptrQuestions" runat="server" OnItemDataBound="rptrQuestions_ItemDataBound" > ... <ItemTemplate> <tr> <td> <div align="center"> <asp:Label class="text" Text='<%#DataBinder.Eval(Container.DataItem, "Question_title")%>' runat="server" ID="lbl_title" NAME="lbl_title"/> <br> </div> </td> </tr> <tr><td> <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" id="result_survey" width="100%" height="100%" codebase="http://fpdownload.macromedia.com/get/flashplayer/current/swflash.cab"> <param name="movie" value="result_survey.swf" /> <param name="quality" value="high" /> <param name="bgcolor" value="#ffffff" /> <param name="allowScriptAccess" value="sameDomain" /> <param name="flashvars" value='<%#DataBinder.Eval(Container.DataItem, "rank_order")%>' /> <embed src="result_survey.swf?rankOrder='<%#DataBinder.Eval(Container.DataItem, "rank_order")%>' quality="high" bgcolor="#ffffff" width="100%" height="100%" name="result_survey" align="middle" play="true" loop="false" allowscriptaccess="sameDomain" type="application/x-shockwave-flash" pluginspage="http://www.adobe.com/go/getflashplayer"> </embed> </object> </td></tr> </ItemTemplate> </asp:Repeater> c# protected void print2(object sender, EventArgs e) { HttpContext.Current.Response.Clear(); HttpContext.Current.Response.Charset = ""; HttpContext.Current.Response.ContentEncoding = System.Text.Encoding.UTF7; HttpContext.Current.Response.Cache.SetCacheability(HttpCacheability.NoCache); HttpContext.Current.Response.ContentType = "application/msword"; HttpContext.Current.Response.AddHeader("content-disposition", "attachment;filename=" + "Report.doc"); EnableViewState = false; System.IO.StringWriter sw = new System.IO.StringWriter(); HtmlTextWriter htw = new HtmlTextWriter(sw); // Here I render the Repeater foreach (RepeaterItem row in rptrQuestions.Items) { row.RenderControl(htw); } StringBuilder sb1 = new StringBuilder(); sb1 = sb1.Append("<table>" + sw.ToString() + "</table>"); HttpContext.Current.Response.Write(sb1.ToString()); HttpContext.Current.Response.Flush(); HttpContext.Current.Response.End(); } .mxml //################################################## // grabScreen (return image representation of the SWF movie (snapshot) //###################################################### public function grabScreen() : ByteArray { return ImageSnapshot.captureImage( boxMain, 0, new PNGEncoder() ).data(); }
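
    The server-side C# can't call the swf's public function directly, but one hedged route is to expose grabScreen() to the hosting page with ExternalInterface, pull the PNG bytes out as Base64 in JavaScript before the postback, and hand that string to the export code. A sketch of the Flex side (the callback and helper names are assumptions, not part of the existing app):

```actionscript
// Hedged sketch for the mxml side: expose the snapshot as a Base64 string
// so the hosting page (and ultimately the ASP.NET export) can reach it.
import flash.external.ExternalInterface;
import mx.utils.Base64Encoder;

private function registerScreenGrab():void {
    if (ExternalInterface.available) {
        ExternalInterface.addCallback("grabScreenBase64", grabScreenBase64);
    }
}

private function grabScreenBase64():String {
    var encoder:Base64Encoder = new Base64Encoder();
    encoder.encodeBytes(grabScreen());   // the existing public function
    return encoder.toString();
}
```

    On the page, document.getElementById('result_survey').grabScreenBase64() then returns a string that can be stashed in a hidden field before print2 runs and emitted into the exported HTML as an &lt;img&gt; - with the caveat that older Word versions are picky about data URIs, so writing the bytes to a temporary image URL on the server may be the safer path.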

    Read the article

  • Backbone.js Model change events in nested collections not firing as expected

    - by Pallavi Kaushik
    I'm trying to use backbone.js in my first "real" application and I need some help debugging why certain model change events are not firing as I would expect. I have a web service at /employees/{username}/tasks which returns a JSON array of task objects, with each task object nesting a JSON array of subtask objects. For example, [{ "id":45002, "name":"Open Dining Room", "subtasks":[ {"id":1,"status":"YELLOW","name":"Clean all tables"}, {"id":2,"status":"RED","name":"Clean main floor"}, {"id":3,"status":"RED","name":"Stock condiments"}, {"id":4,"status":"YELLOW","name":"Check / replenish trays"} ] },{ "id":47003, "name":"Open Registers", "subtasks":[ {"id":1,"status":"YELLOW","name":"Turn on all terminals"}, {"id":2,"status":"YELLOW","name":"Balance out cash trays"}, {"id":3,"status":"YELLOW","name":"Check in promo codes"}, {"id":4,"status":"YELLOW","name":"Check register promo placards"} ] }] Another web service allows me to change the status of a specific subtask in a specific task, and looks like this: /tasks/45002/subtasks/1/status/red [aside - I intend to change this to a HTTP Post-based service, but the current implementation is easier for debugging] I have the following classes in my JS app: Subtask Model and Subtask Collection var Subtask = Backbone.Model.extend({}); var SubtaskCollection = Backbone.Collection.extend({ model: Subtask }); Task Model with a nested instance of a Subtask Collection var Task = Backbone.Model.extend({ initialize: function() { // each Task has a reference to a collection of Subtasks this.subtasks = new SubtaskCollection(this.get("subtasks")); // status of each Task is based on the status of its Subtasks this.update_status(); }, ... }); var TaskCollection = Backbone.Collection.extend({ model: Task }); Task View to renders the item and listen for change events to the model var TaskView = Backbone.View.extend({ tagName: "li", template: $("#TaskTemplate").template(), initialize: function() { _.bindAll(this, "on_change", "render"); this.model.bind("change", this.on_change); }, ... on_change: function(e) { alert("task model changed!"); } }); When the app launches, I instantiate a TaskCollection (using the data from the first web service listed above), bind a listener for change events to the TaskCollection, and set up a recurring setTimeout to fetch() the TaskCollection instance. ... TASKS = new TaskCollection(); TASKS.url = ".../employees/" + username + "/tasks" TASKS.fetch({ success: function() { APP.renderViews(); } }); TASKS.bind("change", function() { alert("collection changed!"); APP.renderViews(); }); // Poll every 5 seconds to keep the models up-to-date. setInterval(function() { TASKS.fetch(); }, 5000); ... Everything renders as expected the first time. But at this point, I would expect either (or both) a Collection change event or a Model change event to get fired if I change a subtask's status using my second web service, but this does not happen. Funnily, I did get change events to fire if I added one additional level of nesting, with the web service returning a single object that has the Tasks Collection embedded, for example: "employee":"pkaushik", "tasks":[{"id":45002,"subtasks":[{"id":1..... But this seems klugey... and I'm afraid I haven't architected my app right. I'll include more code if it helps, but this question is already rather verbose. Thoughts?
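
    A hedged reading of the symptoms: Collection.fetch() in the Backbone builds of this era replaces the models wholesale (refresh/reset) rather than set()-ing attributes on existing ones, so no per-model "change" fires - which would also explain why wrapping everything in one outer model (whose fetch() does go through set()) made events appear. And even when a Task is updated in place, the nested SubtaskCollection built in initialize is never refreshed from the new "subtasks" attribute. A sketch of wiring that up (reset assumes Backbone 0.5+, earlier builds call it refresh; update_status is the existing method):

```javascript
var Task = Backbone.Model.extend({
    initialize: function () {
        var self = this;
        this.subtasks = new SubtaskCollection(this.get("subtasks"));
        this.update_status();

        // When set()/fetch() replaces the raw "subtasks" attribute, feed it back
        // into the nested collection so anything bound to it hears about it.
        this.bind("change:subtasks", function (model, subtasks) {
            self.subtasks.reset(subtasks);
            self.update_status();
        });
    }
});
```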

    Read the article

  • STL operator= behavior change with Visual Studio 2010?

    - by augnob
    Hi, I am attempting to compile QtScriptGenerator (gitorious) with Visual Studio 2010 (C++) and have run into a compile error. In searching for a solution, I have seen occasional references to compile breakages introduced since VS2008 due to changes in VS2010's implementation of STL and/or c++0x conformance changes. Any ideas what is happening below, or how I could go about fixing it? If the offending code appeared to be QtScriptGenerator's, I think I would have an easier time fixing it.. but it appears to me that the offending code may be in VS2010's STL implementation and I may be required to create a workaround? PS. I am pretty unfamiliar with templates and STL. I have a background in embedded and console projects where such things have until recently often been avoided to reduce memory consumption and cross-compiler risks. C:\Program Files\Microsoft Visual Studio 10.0\VC\INCLUDE\xutility(275) : error C2679: binary '=' : no operator found which takes a right-hand operand of type 'rpp::pp_output_iterator<_Container>' (or there is no acceptable conversion) with [ _Container=std::string ] c:\qt\qtscriptgenerator\generator\parser\rpp\pp-iterator.h(75): could be 'rpp::pp_output_iterator<_Container> &rpp::pp_output_iterator<_Container>::operator =(const char &)' with [ _Container=std::string ] while trying to match the argument list '(rpp::pp_output_iterator<_Container>, rpp::pp_output_iterator<_Container>)' with [ _Container=std::string ] C:\Program Files\Microsoft Visual Studio 10.0\VC\INCLUDE\xutility(2176) : see reference to function template instantiation '_Iter &std::_Rechecked<_OutIt,_OutIt>(_Iter &,_UIter)' being compiled with [ _Iter=rpp::pp_output_iterator<std::string>, _OutIt=rpp::pp_output_iterator<std::string>, _UIter=rpp::pp_output_iterator<std::string> ] c:\qt\qtscriptgenerator\generator\parser\rpp\pp-internal.h(83) : see reference to function template instantiation '_OutIt std::copy<std::_String_iterator<_Elem,_Traits,_Alloc>,_OutputIterator>(_InIt,_InIt,_OutIt)' being compiled with [ _OutIt=rpp::pp_output_iterator<std::string>, _Elem=char, _Traits=std::char_traits<char>, _Alloc=std::allocator<char>, _OutputIterator=rpp::pp_output_iterator<std::string>, _InIt=std::_String_iterator<char,std::char_traits<char>,std::allocator<char>> ] c:\qt\qtscriptgenerator\generator\parser\rpp\pp-engine-bits.h(500) : see reference to function template instantiation 'void rpp::_PP_internal::output_line<_OutputIterator>(const std::string &,int,_OutputIterator)' being compiled with [ _OutputIterator=rpp::pp_output_iterator<std::string> ] C:\Program Files\Microsoft Visual Studio 10.0\VC\INCLUDE\xutility(275) : error C2582: 'operator =' function is unavailable in 'rpp::pp_output_iterator<_Container>' with [ _Container=std::string ] Here's some context.. pp-internal.h-- #ifndef PP_INTERNAL_H #define PP_INTERNAL_H #include <algorithm> #include <stdio.h> namespace rpp { namespace _PP_internal { .. 68 template <typename _OutputIterator> 69 void output_line(const std::string &__filename, int __line, _OutputIterator __result) 70 { 71 std::string __msg; 72 73 __msg += "# "; 74 75 char __line_descr[16]; 76 pp_snprintf (__line_descr, 16, "%d", __line); 77 __msg += __line_descr; 78 79 __msg += " \""; 80 81 if (__filename.empty ()) 82 __msg += "<internal>"; 83 else 84 __msg += __filename; 85 86 __msg += "\"\n"; 87 std::copy (__msg.begin (), __msg.end (), __result); 88 }
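
    The error is less about output_line than about the iterator type itself: VC10's std::copy re-wraps output iterators through helpers (_Rechecked) that copy-assign the iterator, so pp_output_iterator needs a usable copy assignment operator - and since it appears to hold a reference to its container, the compiler cannot generate one, leaving only the declared operator=(const char&). A self-contained sketch of the same failure mode and the usual cure (illustrative code, not the QtScriptGenerator sources):

```cpp
// Minimal illustration: an output iterator with a reference member gets no
// implicit copy assignment, which is exactly what VC10's checked-iterator
// helpers need when std::copy re-wraps the iterator. Holding a pointer
// instead restores assignability.
#include <algorithm>
#include <iterator>
#include <string>

struct appender : std::iterator<std::output_iterator_tag, void, void, void, void>
{
    explicit appender(std::string &target) : _target(&target) {}  // pointer, not reference

    appender &operator=(char c) { _target->push_back(c); return *this; }
    appender &operator*()       { return *this; }
    appender &operator++()      { return *this; }
    appender  operator++(int)   { return *this; }

private:
    std::string *_target;  // a reference member here would suppress copy assignment
};

int main()
{
    std::string src = "hello", dst;
    std::copy(src.begin(), src.end(), appender(dst));  // compiles on VC10 because appender is assignable
    return dst == "hello" ? 0 : 1;
}
```

    The equivalent patch in pp-iterator.h is to make pp_output_iterator copy-assignable - store a pointer instead of a reference, or hand-write an operator= for its own type - without touching output_line at all.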

    Read the article

  • Adding UL to jQuery UI Tabs

    - by Dave Kiss
    It seems like whenever I try to add a UL inside of the containers when using jQuery UI - Tabs, it breaks the javascript. Is there a way I can use a UL inside of these containers that I am missing? Thanks <div id="tabs"> <div id="fragment-1"> <h4>Pre-Press Requirements</h4> ? SAMPLE of final artwork with noted sizes. ? NATIVE FILES & High Resolution PDF preferred. Files must be created or saved to the listed accepted file formats. ? FONTS used in files need to be supplied in a separate folder marked "fonts". Please ensure all families (screen & printer) fonts are supplied for the job. ? IMAGES must be saved as CMYK and no less than 300dpi. NO RGB FILES! We prefer images to be TIF or EPS formats. If you are submitting artwork for spot color printing vector artwork is preferred. Your images should be supplied in a separate folder marked "links". This will ensure proper reproduction of your artwork. ? COLORS need to be clearly specified. Pantone (PMS) colors preferred. Please specify if job is to be printed CMYK, spot color, etc. ? BLEEDS should be no less than .25" ? PDF's should be High Resolution. Please include any spot colors or CMYK format for full color printing. NO RGB FILES! All bleeds should be included with trim marks. All fonts must be embedded or outlined. No "layered" PDF files. </div> <div id="fragment-2"> <p>Our presses are all capable of sizes up to 11" x 17" using spot color or full color.</p> </div> <div id="fragment-3"> <p>USA Quickprint has complete in house bindery to finish each job to meet your needs.</p> <ul> <li>Folding</li> <li>Scoring</li> <li>Perforation</li> <li>Drilling</li> <li>Shrink Wrap</li> <li>Trimming</li> <li>Collating</li> <li>Spiral Binding</li> <li>GBC Binding</li> <li>Padding</li> <li>Stapling</li> <li>Numbering</li> <li>Lamination</li> </ul> </div>
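
    A hedged guess at the breakage: the 1.8-era tabs widget locates its tab strip by taking the first &lt;ul&gt;/&lt;ol&gt; it finds inside the container, so if the navigation list is missing (as in the markup above) or comes after a content list, a &lt;ul&gt; inside one of the fragments gets mistaken for the tab strip. Keeping an explicit navigation list as the first child lets the bindery list live inside #fragment-3 untouched:

```html
<div id="tabs">
  <ul>
    <li><a href="#fragment-1">Pre-Press</a></li>
    <li><a href="#fragment-2">Presses</a></li>
    <li><a href="#fragment-3">Bindery</a></li>
  </ul>
  <div id="fragment-1">...</div>
  <div id="fragment-2">...</div>
  <div id="fragment-3">
    <ul>
      <li>Folding</li>
      <li>Scoring</li>
      <!-- remaining bindery items -->
    </ul>
  </div>
</div>
<script type="text/javascript">
  $(function () { $("#tabs").tabs(); });
</script>
```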

    Read the article

  • jQuery Uncaught SyntaxError: Unexpected token )

    - by js00831
    I am getting a jquery error in my code... can you guys tell me whats the reason for this when you click the united states red color button in this link you will see this error http://jsfiddle.net/SSMX4/88/embedded/result/ function checkCookie(){ var cookie_locale = readCookie('desired-locale'); var show_blip_count = readCookie('show_blip_count'); var tesla_locale = 'en_US'; //default to US var path = window.location.pathname; // debug.log("path = " + path); var parsed_url = parseURL(window.location.href); var path_array = parsed_url.segments; var path_length = path_array.length var locale_path_index = -1; var locale_in_path = false; var locales = ['en_AT', 'en_AU', 'en_BE', 'en_CA', 'en_CH', 'de_DE', 'en_DK', 'en_GB', 'en_HK', 'en_EU', 'jp', 'nl_NL', 'en_US', 'it_IT', 'fr_FR', 'no_NO'] // see if we are on a locale path $.each(locales, function(index, value){ locale_path_index = $.inArray(value, path_array); if (locale_path_index != -1) { tesla_locale = value == 'jp' ? 'ja_JP':value; locale_in_path = true; } }); // debug.log('tesla_locale = ' + tesla_locale); cookie_locale = (cookie_locale == null || cookie_locale == 'null') ? false:cookie_locale; // Only do the js redirect on the static homepage. if ((path_length == 1) && (locale_in_path || path == '/')) { debug.log("path in redirect section = " + path); if (cookie_locale && (cookie_locale != tesla_locale)) { // debug.log('Redirecting to cookie_locale...'); var path_base = ''; switch (cookie_locale){ case 'en_US': path_base = path_length > 1 ? path_base:'/'; break; case 'ja_JP': path_base = '/jp' break; default: path_base = '/' + cookie_locale; } path_array = locale_in_path != -1 ? path_array.slice(locale_in_path):path_array; path_array.unshift(path_base); window.location.href = path_array.join('/'); } } // only do the ajax call if we don't have a cookie if (!cookie_locale) { // debug.log('doing the cookie check for locale...') cookie_locale = 'null'; var get_data = {cookie:cookie_locale, page:path, t_locale:tesla_locale}; var query_country_string = parsed_url.query != '' ? parsed_url.query.split('='):false; var query_country = query_country_string ? (query_country_string.slice(0,1) == '?country' ? query_country_string.slice(-1):false):false; if (query_country) { get_data.query_country = query_country; } $.ajax({ url:'/check_locale', data:get_data, cache: false, dataType: "json", success: function(data){ var ip_locale = data.locale; var market = data.market; var new_locale_link = $('#locale_pop #locale_link'); if (data.show_blip && show_blip_count < 3) { setTimeout(function(){ $('#locale_msg').text(data.locale_msg); $('#locale_welcome').text(data.locale_welcome); new_locale_link[0].href = data.new_path; new_locale_link.text(data.locale_link); new_locale_link.attr('rel', data.locale); if (!new_locale_link.hasClass(data.locale)) { new_locale_link.addClass(data.locale); } $('#locale_pop').slideDown('slow', function(){ var hide_blip = setTimeout(function(){ $('#locale_pop').slideUp('slow', function(){ var show_blip_count = readCookie('show_blip_count'); if (!show_blip_count) { createCookie('show_blip_count',1,360); } else if (show_blip_count < 3 ) { var b_count = show_blip_count; b_count ++; eraseCookie('show_blip_count'); createCookie('show_blip_count',b_count,360); } }); },10000); $('#locale_pop').hover(function(){ clearTimeout(hide_blip); },function(){ setTimeout(function(){$('#locale_pop').slideUp();},10000); }); }); },1000); } } }); } }

    Read the article

< Previous Page | 121 122 123 124 125 126 127 128 129 130 131 132  | Next Page >