content
stringlengths
228
999k
pred_label
stringclasses
1 value
pred_score
float64
0.5
1
How To View Android Files On Pc? Method 1 Using the USB Cable • Attach the cable to your PC. • Plug the free end of the cable into your Android. • Allow your computer to access your Android. • Enable USB access if necessary. • Open Start. • Open This PC. • Double-click your Android’s name. • Double-click your Android’s storage. How can I access my phone internal storage from my computer? The first way is to access Android files from PC via USB cable without other tools. First, open the USB debug mode and plug in the USB cable. If you want to manage files in the SD card, change the connection mode to USB storage. If you want to manage the files in the internal memory, switch the connection mode to PTP. How do I find my files on Android Windows 10? Windows 10 Doesn’t Recognize My Android Device, What To Do? 1. On your Android device open Settings and go to Storage. 2. Tap the more icon in the top right corner and choose USB computer connection. 3. From the list of options select Media device (MTP). 4. Connect your Android device to your computer, and it should be recognized. Can I access Android root files from PC? Access Android Files on Windows PC. To access Android files and folders on Windows PC over WiFi, we are going to use the popular file manager ES File Explorer. To start off, install ES File Explorer if you haven’t already. Can I connect my Android phone to my PC? It’s easy to do. Connect the USB cable that shipped with your phone to your computer, then plug it into the phone’s USB port. Next, on your Android device, open Settings > Network & internet > Hotspot & tethering. Tap the USB tethering option. READ  Question: How To Use Dolphin Emulator On Android? How can I access my Android phone from PC without unlocking? Here’s how to use Android Control. • Step 1: Install ADB on your PC. • Step 2: Once the command prompt is open enter the following code: • Step 3: Reboot. • Step 4: At this point, simply connect your Android device to your PC and the Android Control Screen will popup allowing you to control your device via your computer. How do I access internal storage? Tap a folder to browse. If you’ve inserted an SD card into your Android, you’ll see two folders or drive icons—one for the SD card (called SD card or Removable Storage), and another for the internal memory (called Internal Storage or Internal Memory). Tap a file to open it in its default app. Where are my downloaded files on Android? Steps 1. Open the app drawer. This is the list of apps on your Android. 2. Tap Downloads, My Files, or File Manager. The name of this app varies by device. 3. Select a folder. If you only see one folder, tap its name. 4. Tap Download. You may have to scroll down to find it. How do I get my computer to recognize my USB device? Method 4: Reinstall USB controllers. • Select Start, then type device manager in the Search box, and then select Device Manager. • Expand Universal Serial Bus controllers. Press and hold (or right-click) a device and select Uninstall. • Once complete, restart your computer. Your USB controllers will automatically install. How do I transfer files between computers? To ease your transition between PCs, here are six ways you can transfer your data. 1. Use OneDrive to transfer your data. 2. Use an external hard drive to transfer your data. 3. Use a transfer cable to transfer your data. 4. Use PCmover to transfer your data. 5. Use Macrium Reflect to clone your hard drive. 6. Sharing files without HomeGroup. How do I transfer files from ES File Explorer to PC? 
To share files between your Android device and a Windows PC using ES File Explorer, follow the steps below: • Step 1: Create a shared folder on your Windows PC. • Step 2: In ES File Explorer on your Android device, tap the globe icon in the upper-left hand corner, then navigate to Network > LAN. How do I access files using ADB? Using ADB Push to Copy a File to Android 1. Connect the USB cable to the device from the computer. 2. Move/copy the file to the same folder as your ADB tools. 3. Launch a Command Prompt or PowerShell in that same folder. 4. Type the following command. . . 5. adb push <local file> <remote location> 6. . . . How do I access files on Android? How to Use Android’s Built-in File Manager • Browse the file system: Tap a folder to enter it and view its contents. • Open files: Tap a file to open it in an associated app, if you have an app that can open files of that type on your Android device. • Select one or more files: Long-press a file or folder to select it. READ  Quick Answer: How To Leave A Group Text On Android? How do I connect my Android to my PC wirelessly? Transfer data wirelessly to your Android device 1. Download Software Data Cable here. 2. Make sure your Android device and your computer are both attached to the same Wi-Fi network. 3. Launch the app and tap Start Service in the lower left. 4. You should see an FTP address near the bottom of your screen. 5. You should see a list of folders on your device. How do I connect my Android phone to Windows 10? Connect Android or iOS Phone to Windows 10 • On your Windows 10 PC, open Settings app. • Click on the Phone option. • Now, to connect your Android or iOS device to Windows 10, you can start by clicking Add a phone. • On the new window that appears, choose your country code and fill in your mobile number. How can I remotely access my PC from my Android phone? Follow these steps to get started with Remote Desktop on your Android device: 1. Download the Remote Desktop client from Google Play. 2. Set up your PC to accept remote connections. 3. Add a Remote Desktop connection or a remote resource. 4. Create a widget so you can get to Remote Desktop quickly. How can I access my Android phone from PC? Method 1 Using the USB Cable • Attach the cable to your PC. • Plug the free end of the cable into your Android. • Allow your computer to access your Android. • Enable USB access if necessary. • Open Start. • Open This PC. • Double-click your Android’s name. • Double-click your Android’s storage. How can I retrieve data from a locked phone? Steps To Retrieve Data From Locked Android With Broken Screen 1. Step 1: Connect Your Android Phone To Computer. 2. Step 2: Select The File Types That You Wish To Recover From Broken Phone. 3. Step 3: Select The Problem That Matches Your Phone State. 4. Step 4: Enter Into Download Mode On The Android Device. How can I access my broken phone from my computer without USB debugging? Enable USB Debugging without Touching Screen • With a workable OTG adapter, connect your Android phone with a mouse. • Click the mouse to unlock your phone and turn on USB debugging on Settings. • Connect the broken phone to computer and the phone will be recognized as external memory. How do I access internal storage on Android? Tap it to open the device’s Settings menu. Select “Storage.” Scroll down the Settings menu to locate the “Storage” option, and then tap on it to access the Device Memory screen. Check the phone’s total and available storage space. Where do I find my files? To view files in My Files: 1. 
From home, tap Apps > Samsung > My Files . 2. Tap a category to view the relevant files or folders. 3. Tap a file or folder to open it. READ  Quick Answer: How To Get Spotify Premium For Free Android? Where are game files stored on Android? Actually, the files of the Apps that you downloaded from the Play Store are stored on your phone. You can find it in your phone’s Internal Storage > Android > data > …. In some of the mobile phones, files are stored in SD Card > Android > data > How do I transfer files from desktop to laptop? Then go to Network on your laptop and choose show workgroup computers, and all the drives from your desktop will appear after that. The rest is click and drag the files to the designed drive on your laptop. Another way to transfer files between PCs is using Windows Easy Transfer (WET) application. What is the fastest way to transfer files between computers? Using an Ethernet Cable. This is one of the fastest method of transferring files between your computers. Connect the two PC’s to a network switch or use a crossover Ethernet cable and assign a private IP address to the two PC’s from the same subnet. Share the folders using the share wizard provided by Windows. What is the fastest way to transfer files between two computers? Steps • Ensure both computers are on same network. A Server Message Block (SMB) is a protocol (set of rules) for transferring files between computers over the internet. • Set up your server laptop. • Switch to the client laptop. • Access the files and begin the transfer. How do I access files on my Android phone? In this how-to, we’ll show you where the files are and what app to use to find them. 1. When you download e-mail attachments or Web files, they get placed in the “download” folder. 2. Once the file manager opens, select “Phone files.” 3. From the list of file folders, scroll down and select the “download” folder. How do I open file manager on Android? Go to the Settings app then tap Storage & USB (it’s under the Device subheading). Scroll to the bottom of the resulting screen then tap Explore: Just like that, you’ll be taken to a file manager that lets you get at just about any file on your phone. How do I unzip files on Android? How to Unzip Files on Android • Go to the Google Play Store and install Files by Google. • Open Files by Google and locate the ZIP file you want to unzip. • Tap the file you want to unzip. • Tap Extract to unzip the file. • Tap Done. • All of the extracted files are copied to the same location as the original ZIP file. Photo in the article by “DeviantArt” https://www.deviantart.com/pcapos/art/Naruto-ans-Sasuke-686195601
__label__pos
0.998743
Reputation 6,819 Top tag Next privilege 10,000 Rep. Access moderator tools Badges 14 55 106 Impact ~940k people reached Aug 22 awarded  Favorite Question Aug 19 comment Why doesn't “add more cores” face the same physical limitations as “make the CPU faster”? @peter You make a very good point, and thanks for explaining that. It's something I need to remember as a programmer. :) It's still a bit of a side issue for this question's purposes, though. My question was about why we can't get faster clock speeds; your answer is about why we don't currently need to. Aug 19 comment Why doesn't “add more cores” face the same physical limitations as “make the CPU faster”? @user20574 - right, but on that analogy, we'd be carrying larger and larger laptops as we move to multi-cores. Aug 17 awarded  Notable Question Aug 17 awarded  Notable Question Aug 16 accepted Why doesn't “add more cores” face the same physical limitations as “make the CPU faster”? Aug 16 comment Why doesn't “add more cores” face the same physical limitations as “make the CPU faster”? @user20574 You couldn't, if you started with a room that would fit only one computer, unless you found a way to shrink the computers dramatically. Aug 16 awarded  Good Question Aug 15 awarded  Popular Question Aug 15 comment Why doesn't “add more cores” face the same physical limitations as “make the CPU faster”? 'multiple cores is like having multiple "computers" on the same device.' Right, but my confusion was, how do you fit them all in there? I thought "we can't go faster" was a symptom of "we can't shrink things much more." Aug 15 comment Why doesn't “add more cores” face the same physical limitations as “make the CPU faster”? "you need more physical space to put the extra core. However, CPU process sizes constantly shrink a lot, so there's plenty of space to put two copies of a previous design" Maybe this is getting at my original confusion. I thought "faster chip" == "higher density of switches", so I thought "To fit more cores, you have to shrink them. If you can shrink them, you are making them denser. If you can make them denser, you can make them faster. How are these not the same problem?" But I'm way out of my realm of knowledge here. :) Aug 15 awarded  Nice Question Aug 15 asked Why doesn't “add more cores” face the same physical limitations as “make the CPU faster”? Jul 25 awarded  Famous Question Jul 16 awarded  Nice Question Jul 15 awarded  Yearling Jul 12 awarded  Popular Question Jul 10 awarded  Popular Question Jul 2 awarded  Inquisitive Jul 2 awarded  Curious
__label__pos
0.618424
Unix & Linux Stack Exchange is a question and answer site for users of Linux, FreeBSD and other Un*x-like operating systems.

Question: I'm reading about repositories in Debian and I've come across backports, but the name seems strange to me. I would expect the word "back" to indicate older stable versions that are more compatible with less frequently updated programs. So why are they called backports?

Accepted answer: It's not about older stable versions. "Stable" means that the versions in that release are not to be changed anymore. Debian puts new versions into experimental/unstable and perhaps into testing, and some really important programs (like Iceweasel) are also "backported" into the already stable release (back, if you like, to the old testing). That way, people who would rather not use something less stable still don't have to wait a whole year for the program.

Second answer: It helps to understand what the action of porting software means. From Wikipedia, porting is the process of adapting software so that an executable program can be created for a computing environment that is different from the one for which it was originally designed (my emphasis added on the word different). The "back" in backporting denotes the porting of current software backwards to run on an older platform environment, like Debian Stable.
__label__pos
0.892973
  Ready to get started? Learn more about CData API Server or sign up for a free trial: Learn More Automate Tasks in Microsoft Flow Using the CData API Server and Elasticsearch ADO.NET Provider Automate actions like sending emails to a contact list, posting to social media, or syncing CRM and ERP. Microsoft Flow makes it easy to automate tasks that involve data from multiple systems, on premises or in the cloud. With the CData API Server and Elasticsearch ADO.NET Provider (or any of 140+ other ADO.NET Providers), line-of-business users have a native way to create actions based on Elasticsearch triggers in Microsoft Flow; the API Server makes it possible for SaaS applications like Microsoft Flow to integrate seamlessly with Elasticsearch data through data access standards like Swagger and OData. This article shows how to use wizards in Microsoft Flow and the API Server for Elasticsearch to create a trigger -- entities that match search criteria -- and send an email based on the results. Set Up the API Server Follow the steps below to begin producing secure and Swagger-enabled Elasticsearch APIs: Deploy The API Server runs on your own server. On Windows, you can deploy using the stand-alone server or IIS. On a Java servlet container, drop in the API Server WAR file. See the help documentation for more information and how-tos. The API Server is also easy to deploy on Microsoft Azure, Amazon EC2, and Heroku. Connect to Elasticsearch After you deploy, provide authentication values and other connection properties by clicking Settings -> Connections in the API Server administration console. You can then choose the entities you want to allow the API Server access to by clicking Settings -> Resources. Set the Server and Port connection properties to connect. To authenticate, set the User and Password properties, PKI (public key infrastructure) properties, or both. To use PKI, set the SSLClientCert, SSLClientCertType, SSLClientCertSubject, and SSLClientCertPassword properties. The data provider uses X-Pack Security for TLS/SSL and authentication. To connect over TLS/SSL, prefix the Server value with 'https://'. Note: TLS/SSL and client authentication must be enabled on X-Pack to use PKI. Once the data provider is connected, X-Pack will then perform user authentication and grant role permissions based on the realms you have configured. You will also need to enable CORS and define the following sections on the Settings -> Server page. As an alternative, you can select the option to allow all domains without '*'. 1. Access-Control-Allow-Origin: Set this to a value of '*' or specify the domains that are allowed to connect. 2. Access-Control-Allow-Methods: Set this to a value of "GET,PUT,POST,OPTIONS". 3. Access-Control-Allow-Headers: Set this to "x-ms-client-request-id, authorization, content-type". Authorize API Server Users After determining the OData services you want to produce, authorize users by clicking Settings -> Users. The API Server uses authtoken-based authentication and supports the major authentication schemes. You can authenticate as well as encrypt connections with SSL. Access can also be restricted by IP address; access is restricted to only the local machine by default. For simplicity, we will allow the authtoken for API users to be passed in the URL. You will need to add a setting in the Application section of the settings.cfg file, located in the data directory. On Windows, this is the app_data subfolder in the application root. 
In the Java edition, the location of the data directory depends on your operation system: 1. Windows: C:\ProgramData\CData 2. Unix or Mac OS X: ~/cdata [Application] AllowAuthtokenInURL = true Add Elasticsearch Data to a Flow You can use the built-in HTTP + Swagger connector to use a wizard to design a Elasticsearch process flow: 1. In Microsoft Flow, click My Flows -> Create from Blank. 2. Select the Recurrence action and select a time interval for sending emails. This article uses 1 day. 3. Add an HTTP + Swagger action by searching for Swagger. 4. Enter the URL to the Swagger metadata document: https://MySite:MyPort/api.rsc/@MyAuthtoken/$swagger 5. Select the "Return Orders" operation. 6. Build the OData query to retrieve Elasticsearch data. This article defines the following OData filter expression in the $filter box: ShipCity eq 'New York' See the API Server help documentation for more on filtering and examples of the supported OData. Trigger an Action You can now work with Orders entities in your process flow. Follow the steps to send an automated email: 1. Add an SMTP - Send Email action. 2. Enter the address and credentials for the SMTP server and name the connection. Be sure to enable encryption if supported by your server. 3. Enter the message headers and body. You can add Elasticsearch columns in these boxes.    
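For reference, the Orders data pulled into the flow above comes from an ordinary OData request to the API Server. With the hypothetical host, port, and authtoken values used earlier in this article, that request would look something like https://MySite:MyPort/api.rsc/@MyAuthtoken/Orders?$filter=ShipCity eq 'New York' (URL-encoded when actually sent).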
__label__pos
0.532987
Passing a double value to a function in C
By: Emiley J

This sample program in C demonstrates the concept of passing a double value to a function as a parameter. The program accepts a number in feet and converts the feet into meters by using a conversion function.

#include <stdio.h>

double feet_to_meter(double f);

int main(void)
{
    double feet;
    printf("Enter feet: ");
    scanf("%lf", &feet);
    printf("Meters: %f", feet_to_meter(feet));
    return 0;
}

double feet_to_meter(double f)
{
    return f / 3.28;
}
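For example, compiling and running this program and entering 82 at the prompt prints "Meters: 25.000000", since 82 / 3.28 = 25.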
__label__pos
0.962182
C: print any three input integers in ascending order (posted 2024-05-15).

To output three input integers from smallest to largest, you can sort them with C's conditional statements and comparison operators. Here is an example:

```c
#include <stdio.h>

int main() {
    int a, b, c;
    printf("Enter three integers:\n");
    scanf("%d %d %d", &a, &b, &c);

    if (a > b) {
        int temp = a;
        a = b;
        b = temp;
    }
    if (a > c) {
        int temp = a;
        a = c;
        c = temp;
    }
    if (b > c) {
        int temp = b;
        b = c;
        c = temp;
    }

    printf("Sorted from smallest to largest: %d %d %d\n", a, b, c);
    return 0;
}
```

In the code above, `scanf` first reads three integers from the user into the variables `a`, `b` and `c`. A series of conditional comparisons and swaps then puts the three values into ascending order, and finally `printf` prints the sorted result.
__label__pos
0.844743
Counting Techniques: Permutations of Selected Elements. Addition Rule, Difference Rule, Inclusion/Exclusion Rule.
Presentation transcript:

Slide 1: Counting Techniques: Permutations of Selected Elements. Addition Rule, Difference Rule, Inclusion/Exclusion Rule.

Slide 2: Permutations of Selected Elements. Typical situation: a chairman, a secretary and a treasurer are to be chosen in a committee of 7 people. Question: in how many different ways can it be done? Definition: an r-permutation of a set S of n elements is an ordered selection of r elements taken from S. The number of all r-permutations of a set of n elements is denoted P(n,r). In the example above we want to find P(7,3).

Slide 3: How to compute P(n,r). Theorem: P(n,r) = n(n-1)(n-2)…(n-r+1) or, equivalently, P(n,r) = n!/(n-r)!. Proof: forming an r-permutation of a set of n elements is an r-step operation. Step 1: choose the 1st element (n different ways). Step 2: choose the 2nd element (n-1 different ways). … Step r: choose the r-th element (n-r+1 different ways). Based on the multiplication rule, the number of r-permutations is n·(n-1)·…·(n-r+1).

Slide 4: Examples of r-permutations. 1. Choosing a chairman, a secretary and a treasurer among 7 people: P(7,3) = 7·6·5 = 210. 2. Suppose Jim is already chosen to be the secretary. Q: How many ways can a chairman and a treasurer be chosen? A: P(6,2) = 6·5 = 30. 3. In an instance of the Traveling Salesman Problem, the total number of cities is 10; this time the salesman is supposed to visit only 4 cities (including the home city). Q: How many different tours are possible? A: P(9,3) = 9·8·7 = 504.

Slide 5: The Addition Rule. Suppose a finite set A equals the union of k distinct mutually disjoint subsets A1, A2, …, Ak. Then n(A) = n(A1) + n(A2) + … + n(Ak). Example: How many integers from 1 through 999 do not have any repeated digits? Solution: Let A = integers from 1 to 999 not having repeated digits. Partition A into 3 sets: A1 = one-digit integers not having repeated digits; A2 = two-digit integers not having repeated digits; A3 = three-digit integers not having repeated digits. Then n(A) = n(A1) + n(A2) + n(A3) (by the addition rule) = 9 + 9·9 + 9·9·8 = 738 (by the multiplication rule).

Slide 6: The Difference Rule. If A is a finite set and B is a subset of A, then n(A-B) = n(A) - n(B). Example: Assume that any seven digits can be used to form a telephone number. Q: How many seven-digit phone numbers have at least one repeated digit? Let A = the set of all possible 7-digit phone numbers and B = the set of 7-digit numbers without repetition; note that B ⊆ A. Then A-B is the set of 7-digit numbers with repetition. n(A-B) = n(A) - n(B) (by the difference rule) = 10^7 - P(10,7) (by the multiplication rule) = 10^7 - 10!/3! = 10,000,000 - 3,628,800/6 = 10,000,000 - 604,800 = 9,395,200.

Slide 7: The Inclusion/Exclusion Rule for Two or Three Sets. If A, B and C are finite sets, then n(A ∪ B) = n(A) + n(B) - n(A ∩ B) and n(A ∪ B ∪ C) = n(A) + n(B) + n(C) - n(A ∩ B) - n(A ∩ C) - n(B ∩ C) + n(A ∩ B ∩ C). (Venn diagrams of two and of three overlapping sets.)

Slide 8: Example on Inclusion/Exclusion Rule (2 sets). Question: How many integers from 1 through 100 are multiples of 4 or multiples of 6? Solution: Let A = the set of integers from 1 through 100 which are multiples of 4, and B = the set of integers from 1 through 100 which are multiples of 6. Then we want to find n(A ∪ B). First note that A ∩ B is the set of integers from 1 through 100 which are multiples of 12. n(A ∪ B) = n(A) + n(B) - n(A ∩ B) (by the inclusion/exclusion rule) = 25 + 16 - 8 = 33 (by counting the elements of the three lists).

Slide 9: Example on Inclusion/Exclusion Rule (3 sets). Three headache drugs – A, B, and C – were tested on 40 subjects. The results of the tests: 23 reported relief from drug A; 18 reported relief from drug B; 31 reported relief from drug C; 11 reported relief from both drugs A and B; 19 reported relief from both drugs A and C; 14 reported relief from both drugs B and C; 37 reported relief from at least one of the drugs. Questions: 1) How many people got relief from none of the drugs? 2) How many people got relief from all 3 drugs? 3) How many people got relief from A only?

Slide 10: Example on Inclusion/Exclusion Rule (3 sets). We are given: n(A) = 23, n(B) = 18, n(C) = 31, n(A ∩ B) = 11, n(A ∩ C) = 19, n(B ∩ C) = 14, n(S) = 40, n(A ∪ B ∪ C) = 37. Q1) How many people got relief from none of the drugs? By the difference rule, n((A ∪ B ∪ C)^c) = n(S) - n(A ∪ B ∪ C) = 40 - 37 = 3. (Venn diagram of A, B, C inside S.)

Slide 11: Example on Inclusion/Exclusion Rule (3 sets). Q2) How many people got relief from all 3 drugs? By the inclusion/exclusion rule: n(A ∩ B ∩ C) = n(A ∪ B ∪ C) - n(A) - n(B) - n(C) + n(A ∩ B) + n(A ∩ C) + n(B ∩ C) = 37 - 23 - 18 - 31 + 11 + 19 + 14 = 9. Q3) How many people got relief from A only? n(A - (B ∪ C)) (by the inclusion/exclusion rule) = n(A) - n(A ∩ B) - n(A ∩ C) + n(A ∩ B ∩ C) = 23 - 11 - 19 + 9 = 2.
__label__pos
0.998639
intros intros hyps Synopsis: intros intros-spec Pre-conditions: If hyps specifies a number of hypotheses to introduce, then the conclusion of the current sequent must be formed by at least that number of imbricated implications or universal quantifications. Action: It applies several times the right introduction rule for implication, closing the current sequent. New sequents to prove: It opens a new sequent to prove adding a number of new hypotheses equal to the number of new hypotheses requested. If the user does not request a precise number of new hypotheses, it adds as many hypotheses as possible. The name of each new hypothesis is either popped from the user provided list of names, or it is automatically generated when the list is (or becomes) empty.
__label__pos
0.963752
Creating an Annotation/Comment Report This example reads all annotations from all sub workspaces of the current one. 1. Create a report with a Stages Data Source and a Data Set. 2. Create the following Data Set result columns: NameType ProjectString ElementString ElementSubtypeString AnnotationNameString DescriptionString TimestampString LastChangeUserString ElementIdString ElementTypeString 3. Copy the Data Set script from the example into your Data Set. function getChildren(project){ var myprojects = project.getEntities("hierarchy::hierarchic@LOCAL,targetrole=children"); for each (myproject in myprojects) { saveColumn(myproject); getChildren(myproject); } } function existsInArray (element, array){ for (var i = 0; i <array.length; i++){ if (element.equals(array[i])){ return true; } } return false; } function saveColumn(project){ //Step 1: get project id, name and the process var id = project.getProperty("Id"); var projectName = project.getProperty("Name"); var process = project.getEntities("containsProcess@SYSTEM")[0]; //Step 2: get all annotations of the process if (process != null){ //get all elementtypes if ( process.getPkitClass().isAssociationValid("containsAnnotation::MODEL@SYSTEM")) { var allAnnotations = process.getEntities("containsAnnotation@SYSTEM"); for each (annotation in allAnnotations){ var importantAssocNames = new Array(); var allAssocsForAnnotation = annotation.getPkitClass().getAssociations(); if (allAssocsForAnnotation.length> 0) { //go through all assocs, filter out the ones we don't need and duplicates for each (assoc in allAssocsForAnnotation) { if ((assoc.getName() != "containsElement::MODEL@SYSTEM") && (assoc.getName() != "containsAnnotation::MODEL@SYSTEM")){ if (existsInArray(assoc.getName(),importantAssocNames)!=true){ importantAssocNames.push(assoc.getName()); } } } //go through all important assocs of the annotation, get the associated elements, their subtype, etc for each (oneImportantAssocName in importantAssocNames){ var elements = annotation.getEntities(oneImportantAssocName); for each (element in elements){ var elementType = element.getProperty("Type"); if (element.getProperty("SubType") == undefined){ var propertyPath = process.getProperty("Type") +".process.element.type.singular." + elementType.toLowerCase(); } else{ var propertyPath = process.getProperty("Type") +".process.element.type.singular." + elementType.toLowerCase() + "." 
+ element.getProperty("SubType"); } dataset.setColumnValue("Project",projectName); dataset.setColumnValue("Element",element.getProperty("DisplayName")); dataset.setColumnValue("ElementSubtype",properties_de.getProperty(propertyPath)); dataset.setColumnValue("AnnotationName",annotation.getProperty("DisplayName")); dataset.setColumnValue("Description",annotation.getProperty("Description")); dataset.setColumnValue("Timestamp",annotation.getProperty("Timestamp")); dataset.setColumnValue("LastChangeUser",annotation.getProperty("LastChangeUser")); dataset.setColumnValue("ElementId",element.getProperty("Id")); dataset.setColumnValue("ElementType",elementType); dataset.storeResultRow(); } } } } }} } /////////// Start of script /////////// //properties_en = new Properties(); properties_de = new Properties(); //stream_en = new FileInputStream("tomcat/webapps/pkit/WEB-INF/classes/ LocalPKit.properties"); stream_de = new FileInputStream("tomcat/webapps/pkit/WEB-INF/classes/ LocalPKit_de.properties"); //properties_en.load(stream_en); properties_de.load(stream_de); //stream_en.close(); stream_de.close(); var currentProject = pkit.getCurrentProject(); saveColumn(currentProject); /* Iterate through all subprojects */ getChildren(currentProject);
__label__pos
0.986966
Click here to Skip to main content Click here to Skip to main content Gridview with SQL Paging , 21 Jul 2009 CPOL Rate this: Please Sign up or sign in to vote. A simple and detailed ASP.NET program using Gridview with paging in SQL 2005 Introduction This is a simple C# website that uses ASP Gridview to display records, but only displays partial data from the executed SQL Paging function. Background Back when I was not familiar with SQL 2005, I wondered what was new to SQL 2005 and how I could benefit from it. There I found the row_number() function, a function that is similar to table's auto-identity seeding, only that it is implemented during the execution of the query. Here in my sample program. I'll show you how to extend the capability of row_number() to your ASP.NET web page! Remember, the SQL I did here can be easily used as a stored procedure in your database. Using the Code Now, I'll discuss the GetSQL() method or our main SQL paging: string GetSQL() { /* My Generated SQL Paging */ return @" /* Here we declare our main variable, this will be your parameters when you use this as a Stored Procedure */ DECLARE @START AS INT , @MAX AS INT , @SORT AS VARCHAR(100) , @FIELDS AS VARCHAR(MAX) , @OBJECT AS VARCHAR(MAX) SELECT @START = {3} , @MAX = {4} , @SORT = '{2}' , @FIELDS = '{1}' , @OBJECT = '{0}' /* CLEANING PARAMETER VALUES */ IF (ISNULL(@SORT , '') = '') BEGIN SET @SORT = 'SELECT 1' END IF (@START < 1) BEGIN SET @START = 1 END IF (@MAX < 1) BEGIN SET @MAX = 1 END /* SET THE LENGTH OF RESULT */ DECLARE @END AS INT SET @END = (@START + (@MAX - 1)) /* Here we get the total rows therein based from the Object or main SQL Query given to the parameter @object */ /* GET THE TOTAL PAGE COUNT */ DECLARE @SQL_COUNT AS NVARCHAR(MAX) SET @TOTAL = 0 SET @SQL_COUNT = 'SELECT @GET_TOTAL = COUNT(*) FROM (' + @OBJECT + ') AS [TABLE_COUNT]' EXEC sp_executesql @SQL_COUNT, N'@GET_TOTAL INT OUTPUT', @GET_TOTAL = @TOTAL OUTPUT /* Here we are now creating the actual SQL paging script to produce the desired partial records */ /* GET THE RECORDS BASED FROM THE GIVEN STATEMENT AND CONDITION */ DECLARE @SQL AS NVARCHAR(MAX) SET @SQL = 'SELECT ' + @FIELDS + ' FROM ( SELECT (ROW_NUMBER() OVER(ORDER BY ' + @SORT + ')) AS [ROWNUM] , * FROM ( SELECT ' + @FIELDS + ' FROM (' + @OBJECT + ') AS [SOURCE_TABLE] ) AS [SOURCE_COLLECTION] ) AS TMP WHERE [ROWNUM] BETWEEN ' + CAST(@START AS VARCHAR(10)) + ' AND ' + CAST(@END AS VARCHAR(10)) + ' ' EXEC(@SQL) /* we now execute the script */ "; } Now we go to assigning values to a method.  Here, we assign the Object or the main Query. We can use SQL VIEWS, but for this example, we assign plain query. string MAIN_SQL = @" Select A.ProductId, A.ProductName, A.UnitPrice, A.UnitsInStock, B.CompanyName From Products AS A Inner Join Suppliers AS B on (B.SupplierId = A.SupplierId)"; Here, we assign the fields to be displayed on our GridView: string FIELDS_TO_DISPLAY = "ProductId, ProductName, UnitPrice, UnitsInStock,CompanyName"; Here, we assign the Fields to be sorted. In our example, it's just one field and is sorted Ascending: string FIELDS_TO_BE_SORT = " ProductName ASC "; Now we simply put the variable on its index assignment. Remember, if you are using a Stored Procedure, it will be much easier and more descriptive because in our example, we simply used the string format we have from .NET. 
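To make the index mapping concrete, suppose (purely as hypothetical values) rows_start = 1 and rows_per_page = 10. The Format call below then turns the header of the paging script into SELECT @START = 1 , @MAX = 10 , @SORT = ' ProductName ASC ', while @FIELDS and @OBJECT receive the display field list and the main query; that is, {0} is MAIN_SQL, {1} the display fields, {2} the sort expression, and {3} and {4} the start row and page size.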
SQL = string.Format(SQL, MAIN_SQL, FIELDS_TO_DISPLAY, FIELDS_TO_BE_SORT, rows_start, rows_per_page);

The rest is for you to try; these are the only major pieces you need to know, and the remaining code is ordinary, everyday C#. I know there are still improvements that can be made to this article, so please feel free to leave your comments.

Points of Interest
I hope I shared a good article with you! You can contact me at [email protected]

History
• [2009.05.27] - Tom Bauto { version 1.0.0 }

License
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

About the Author
Tom Bauto, Software Developer (Senior), RealPage, Inc., Philippines. I am very passionate about software development; my daily interest is to contribute to innovations. Let's collaborate; let me know at [email protected]
__label__pos
0.687119
/[gentoo-x86]/eclass/vdr-plugin.eclass Gentoo Contents of /eclass/vdr-plugin.eclass Parent Directory Parent Directory | Revision Log Revision Log Revision 1.23 - (show annotations) (download) Sat Jun 17 20:35:02 2006 UTC (11 years, 1 month ago) by zzam Branch: MAIN Changes since 1.22: +5 -3 lines Only exec dodoc on existing files. Prevents warnings, see bug #137100, thanks to Jon Hood <[email protected]> for reporting. 1 # Copyright 1999-2005 Gentoo Foundation 2 # Distributed under the terms of the GNU General Public License v2 3 # $Header: /var/cvsroot/gentoo-x86/eclass/vdr-plugin.eclass,v 1.22 2006/06/17 14:51:32 zzam Exp $ 4 # 5 # Author: 6 # Matthias Schwarzott <[email protected]> 7 8 # vdr-plugin.eclass 9 # 10 # eclass to create ebuilds for vdr plugins 11 # 12 13 # Example ebuild (vdr-femon): 14 # 15 # inherit vdr-plugin 16 # IUSE="" 17 # SLOT="0" 18 # DESCRIPTION="vdr Plugin: DVB Frontend Status Monitor (signal strengt/noise)" 19 # HOMEPAGE="http://www.saunalahti.fi/~rahrenbe/vdr/femon/" 20 # SRC_URI="http://www.saunalahti.fi/~rahrenbe/vdr/femon/files/${P}.tgz" 21 # LICENSE="GPL-2" 22 # KEYWORDS="~x86" 23 # DEPEND=">=media-video/vdr-1.3.27" 24 # 25 # 26 27 # Installation of a config file for the plugin 28 # 29 # If ${VDR_CONFD_FILE} is set install this file 30 # else install ${FILESDIR}/confd if it exists. 31 32 # Gets installed as /etc/conf.d/vdr.${VDRPLUGIN}. 33 # For the plugin vdr-femon this would be /etc/conf.d/vdr.femon 34 35 36 # Installation of an rc-addon file for the plugin 37 # 38 # If ${VDR_RCADDON_FILE} is set install this file 39 # else install ${FILESDIR}/rc-addon.sh if it exists. 40 # 41 # Gets installed under ${VDR_RC_DIR}/plugin-${VDRPLUGIN}.sh 42 # (in example vdr-femon this would be /usr/lib/vdr/rcscript/plugin-femon.sh) 43 # 44 # This file is sourced by the startscript when plugin is activated in /etc/conf.d/vdr 45 # It could be used for special startup actions for this plugins, or to create the 46 # plugin command line options from a nicer version of a conf.d file. 47 48 inherit base multilib eutils flag-o-matic 49 50 IUSE="debug" 51 52 # Name of the plugin stripped from all vdrplugin-, vdr- and -cvs pre- and postfixes 53 VDRPLUGIN="${PN/#vdrplugin-/}" 54 VDRPLUGIN="${VDRPLUGIN/#vdr-/}" 55 VDRPLUGIN="${VDRPLUGIN/%-cvs/}" 56 57 DESCRIPTION="vdr Plugin: ${VDRPLUGIN} (based on vdr-plugin.eclass)" 58 59 # works in most cases 60 S="${WORKDIR}/${VDRPLUGIN}-${PV}" 61 62 # depend on headers for DVB-driver 63 RDEPEND="" 64 DEPEND="media-tv/linuxtv-dvb-headers" 65 66 67 # this code is from linux-mod.eclass 68 update_vdrplugindb() { 69 local VDRPLUGINDB_DIR=${ROOT}/var/lib/vdrplugin-rebuild/ 70 71 if [[ ! -f ${VDRPLUGINDB_DIR}/vdrplugindb ]]; then 72 [[ ! -d ${VDRPLUGINDB_DIR} ]] && mkdir -p ${VDRPLUGINDB_DIR} 73 touch ${VDRPLUGINDB_DIR}/vdrplugindb 74 fi 75 if [[ -z $(grep ${CATEGORY}/${PN}-${PVR} ${VDRPLUGINDB_DIR}/vdrplugindb) ]]; then 76 einfo "Adding plugin to vdrplugindb." 77 echo "a:1:${CATEGORY}/${PN}-${PVR}" >> ${VDRPLUGINDB_DIR}/vdrplugindb 78 fi 79 } 80 81 remove_vdrplugindb() { 82 local VDRPLUGINDB_DIR=${ROOT}/var/lib/vdrplugin-rebuild/ 83 84 if [[ -n $(grep ${CATEGORY}/${PN}-${PVR} ${VDRPLUGINDB_DIR}/vdrplugindb) ]]; then 85 einfo "Removing ${CATEGORY}/${PN}-${PVR} from vdrplugindb." 
86 sed -ie "/.*${CATEGORY}\/${P}.*/d" ${VDRPLUGINDB_DIR}/vdrplugindb 87 fi 88 } 89 90 vdr-plugin_pkg_setup() { 91 # -fPIC is needed for shared objects on some platforms (amd64 and others) 92 append-flags -fPIC 93 use debug && append-flags -g 94 95 # Where should the plugins live in the filesystem 96 VDR_PLUGIN_DIR="/usr/$(get_libdir)/vdr/plugins" 97 VDR_CHECKSUM_DIR="${VDR_PLUGIN_DIR%/plugins}/checksums" 98 99 # transition to /usr/share/... will need new vdr-scripts version stable 100 VDR_RC_DIR="/usr/lib/vdr/rcscript" 101 102 # Pathes to includes 103 VDR_INCLUDE_DIR="/usr/include" 104 DVB_INCLUDE_DIR="/usr/include" 105 106 107 VDRVERSION=$(awk -F'"' '/define VDRVERSION/ {print $2}' ${VDR_INCLUDE_DIR}/vdr/config.h) 108 APIVERSION=$(awk -F'"' '/define APIVERSION/ {print $2}' ${VDR_INCLUDE_DIR}/vdr/config.h) 109 [[ -z ${APIVERSION} ]] && APIVERSION="${VDRVERSION}" 110 111 einfo "Building ${PF} against vdr-${VDRVERSION}" 112 einfo "APIVERSION: ${APIVERSION}" 113 } 114 115 vdr-plugin_src_unpack() { 116 [ -z "$1" ] && vdr-plugin_src_unpack unpack patchmakefile 117 118 while [ "$1" ]; do 119 120 case "$1" in 121 unpack) 122 base_src_unpack 123 ;; 124 patchmakefile) 125 if ! cd ${S}; then 126 ewarn "There seems to be no plugin-directory with the name ${S##*/}" 127 ewarn "Perhaps you find one among these:" 128 cd "${WORKDIR}" 129 einfo "$(/bin/ls -1 ${WORKDIR})" 130 die "Could not change to plugin-source-directory!" 131 fi 132 133 ebegin "Patching Makefile" 134 [[ -e Makefile ]] || die "Makefile of plugin can not be found!" 135 cp Makefile Makefile.orig 136 sed -i.orig Makefile \ 137 -e "s:^VDRDIR.*$:VDRDIR = ${VDR_INCLUDE_DIR}:" \ 138 -e "s:^DVBDIR.*$:DVBDIR = ${DVB_INCLUDE_DIR}:" \ 139 -e "s:^LIBDIR.*$:LIBDIR = ${S}:" \ 140 -e "s:^TMPDIR.*$:TMPDIR = ${T}:" \ 141 -e 's:^CXXFLAGS:#CXXFLAGS:' \ 142 -e 's:-I$(VDRDIR)/include:-I$(VDRDIR):' \ 143 -e 's:-I$(DVBDIR)/include:-I$(DVBDIR):' \ 144 -e 's:-I$(VDRDIR) -I$(DVBDIR):-I$(DVBDIR) -I$(VDRDIR):' \ 145 -e 's:$(VDRDIR)/\([a-z]*\.h\|Make.config\):$(VDRDIR)/vdr/\1:' \ 146 -e 's:^APIVERSION = :APIVERSION ?= :' \ 147 -e 's:$(LIBDIR)/$@.$(VDRVERSION):$(LIBDIR)/$@.$(APIVERSION):' \ 148 -e '1i\APIVERSION = '"${APIVERSION}" 149 eend $? 
150 ;; 151 esac 152 153 shift 154 done 155 } 156 157 vdr-plugin_copy_source_tree() { 158 cp -r ${S} ${T}/source-tree 159 cd ${T}/source-tree 160 mv Makefile.orig Makefile 161 sed -i Makefile \ 162 -e "s:^DVBDIR.*$:DVBDIR = ${DVB_INCLUDE_DIR}:" \ 163 -e 's:^CXXFLAGS:#CXXFLAGS:' \ 164 -e 's:-I$(DVBDIR)/include:-I$(DVBDIR):' \ 165 -e 's:-I$(VDRDIR) -I$(DVBDIR):-I$(DVBDIR) -I$(VDRDIR):' 166 } 167 168 vdr-plugin_install_source_tree() { 169 einfo "Installing sources" 170 destdir=${VDRSOURCE_DIR}/vdr-${VDRVERSION}/PLUGINS/src/${VDRPLUGIN} 171 insinto ${destdir}-${PV} 172 doins -r ${T}/source-tree/* 173 174 dosym ${VDRPLUGIN}-${PV} ${destdir} 175 } 176 177 vdr-plugin_src_compile() { 178 [ -z "$1" ] && vdr-plugin_src_compile prepare compile 179 180 while [ "$1" ]; do 181 182 case "$1" in 183 prepare) 184 [[ -n "${VDRSOURCE_DIR}" ]] && vdr-plugin_copy_source_tree 185 ;; 186 compile) 187 cd ${S} 188 189 emake ${VDRPLUGIN_MAKE_TARGET:-all} || die "emake failed" 190 ;; 191 esac 192 193 shift 194 done 195 } 196 197 vdr-plugin_src_install() { 198 [[ -n "${VDRSOURCE_DIR}" ]] && vdr-plugin_install_source_tree 199 cd ${S} 200 201 if [[ -n ${VDR_MAINTAINER_MODE} ]]; then 202 local mname=${P}-Makefile 203 cp Makefile ${mname}.patched 204 cp Makefile.orig ${mname}.before 205 206 diff -u ${mname}.before ${mname}.patched > ${mname}.diff 207 208 insinto "/usr/share/vdr/maintainer-data/makefile-changes" 209 doins ${mname}.diff 210 211 insinto "/usr/share/vdr/maintainer-data/makefile-before" 212 doins ${mname}.before 213 214 insinto "/usr/share/vdr/maintainer-data/makefile-patched" 215 doins ${mname}.patched 216 217 fi 218 219 insinto "${VDR_PLUGIN_DIR}" 220 doins libvdr-*.so.* 221 local docfile 222 for docfile in README* HISTORY CHANGELOG; do 223 [[ -f ${docfile} ]] && dodoc ${docfile} 224 done 225 226 # if VDR_CONFD_FILE is empty and ${FILESDIR}/confd exists take it 227 [[ -z ${VDR_CONFD_FILE} ]] && [[ -e ${FILESDIR}/confd ]] && VDR_CONFD_FILE=${FILESDIR}/confd 228 229 if [[ -n ${VDR_CONFD_FILE} ]]; then 230 insinto /etc/conf.d 231 newins "${VDR_CONFD_FILE}" vdr.${VDRPLUGIN} 232 fi 233 234 235 # if VDR_RCADDON_FILE is empty and ${FILESDIR}/rc-addon.sh exists take it 236 [[ -z ${VDR_RCADDON_FILE} ]] && [[ -e ${FILESDIR}/rc-addon.sh ]] && VDR_RCADDON_FILE=${FILESDIR}/rc-addon.sh 237 238 if [[ -n ${VDR_RCADDON_FILE} ]]; then 239 insinto "${VDR_RC_DIR}" 240 newins "${VDR_RCADDON_FILE}" plugin-${VDRPLUGIN}.sh 241 fi 242 243 244 245 insinto ${VDR_CHECKSUM_DIR} 246 if [[ -f ${ROOT}${VDR_CHECKSUM_DIR}/header-md5-vdr ]]; then 247 newins ${ROOT}${VDR_CHECKSUM_DIR}/header-md5-vdr header-md5-${PN} 248 else 249 if which md5sum >/dev/null 2>&1; then 250 cd ${S} 251 ( 252 cd ${ROOT}${VDR_INCLUDE_DIR}/vdr 253 md5sum *.h libsi/*.h|LC_ALL=C sort --key=2 254 ) > header-md5-${PN} 255 doins header-md5-${PN} 256 fi 257 fi 258 } 259 260 vdr-plugin_pkg_postinst() { 261 update_vdrplugindb 262 einfo 263 einfo "The vdr plugin ${VDRPLUGIN} has now been installed." 
264 einfo "To activate execute the following command:" 265 einfo 266 einfo " emerge --config ${PN}" 267 einfo 268 if [[ -n "${VDR_CONFD_FILE}" ]]; then 269 einfo "And have a look at the config-file" 270 einfo "/etc/conf.d/vdr.${VDRPLUGIN}" 271 einfo 272 fi 273 } 274 275 vdr-plugin_pkg_postrm() { 276 remove_vdrplugindb 277 } 278 279 vdr-plugin_pkg_config_final() { 280 diff ${conf_orig} ${conf} 281 rm ${conf_orig} 282 } 283 284 vdr-plugin_pkg_config() { 285 if [[ -z "${INSTALLPLUGIN}" ]]; then 286 INSTALLPLUGIN="${VDRPLUGIN}" 287 fi 288 # First test if plugin is already inside PLUGINS 289 local conf=/etc/conf.d/vdr 290 conf_orig=${conf}.before_emerge_config 291 cp ${conf} ${conf_orig} 292 293 einfo "Reading ${conf}" 294 if ! grep -q "^PLUGINS=" ${conf}; then 295 local LINE=$(sed ${conf} -n -e '/^#.*PLUGINS=/=' | tail -n 1) 296 if [[ -n "${LINE}" ]]; then 297 sed -e ${LINE}'a PLUGINS=""' -i ${conf} 298 else 299 echo 'PLUGINS=""' >> ${conf} 300 fi 301 unset LINE 302 fi 303 304 unset PLUGINS 305 PLUGINS=$(source /etc/conf.d/vdr; echo ${PLUGINS}) 306 307 active=0 308 for p in ${PLUGINS}; do 309 if [[ "${p}" == "${INSTALLPLUGIN}" ]]; then 310 active=1 311 break; 312 fi 313 done 314 315 if [[ "${active}" == "1" ]]; then 316 einfo "${INSTALLPLUGIN} already activated" 317 echo 318 read -p "Do you want to deactivate ${INSTALLPLUGIN} (yes/no) " answer 319 if [[ "${answer}" != "yes" ]]; then 320 einfo "aborted" 321 return 322 fi 323 einfo "Removing ${INSTALLPLUGIN} from active plugins." 324 local LINE=$(sed ${conf} -n -e '/^PLUGINS=.*\<'${INSTALLPLUGIN}'\>/=' | tail -n 1) 325 sed -i ${conf} -e ${LINE}'s/\<'${INSTALLPLUGIN}'\>//' \ 326 -e ${LINE}'s/ \( \)*/ /g' \ 327 -e ${LINE}'s/ "/"/g' \ 328 -e ${LINE}'s/" /"/g' 329 330 vdr-plugin_pkg_config_final 331 return 332 fi 333 334 335 einfo "Adding ${INSTALLPLUGIN} to active plugins." 336 local LINE=$(sed ${conf} -n -e '/^PLUGINS=/=' | tail -n 1) 337 sed -i ${conf} -e ${LINE}'s/^PLUGINS=" *\(.*\)"/PLUGINS="\1 '${INSTALLPLUGIN}'"/' \ 338 -e ${LINE}'s/ \( \)*/ /g' \ 339 -e ${LINE}'s/ "/"/g' \ 340 -e ${LINE}'s/" /"/g' 341 342 vdr-plugin_pkg_config_final 343 } 344 345 fix_vdr_libsi_include() 346 { 347 einfo "Fixing include of libsi-headers" 348 local f 349 for f; do 350 sed -i "${f}" \ 351 -e '/#include/s:"\(.*libsi.*\)":<\1>:' \ 352 -e '/#include/s:<.*\(libsi/.*\)>:<vdr/\1>:' 353 done 354 } 355 356 EXPORT_FUNCTIONS pkg_setup src_unpack src_compile src_install pkg_postinst pkg_postrm pkg_config   ViewVC Help Powered by ViewVC 1.1.20  
__label__pos
0.78954
math. While adding ten two-digit numbers, the digits of one of the numbers were interchanged. As a result, the sum of all ten numbers increased by a value which was four less than that number. Three times the sum of the digits of the original number is ten less than the number. What is the product of the digits of that number? Please help me see how to obtain that number.

Answer: Let the number whose digits were interchanged be 10x + y. The intended sum is S = (a1 + a2 + ... + a9) + 10x + y, but the sum actually obtained was S' = (a1 + a2 + ... + a9) + 10y + x, because the digits were interchanged. Since "the sum of all the ten numbers increased by a value which was four less than that number", S' - S = (10x + y) - 4. The a1, ..., a9 cancel out, leaving 9(y - x) = 10x + y - 4, that is, 8y - 19x = -4 ... (1)

"Three times the sum of the digits of the original number is ten less than the number" gives 3(x + y) = 10x + y - 10, that is, 2y - 7x = -10 ... (2)

Multiplying (2) by -4 and adding it to (1) cancels y and gives 9x = 36, so x = 4. Then from (2), 2y - 7·4 = -10, so y = 9. Therefore the product of the digits is x·y = 36.
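As a quick check: with x = 4 and y = 9 the number is 49; written with its digits interchanged it becomes 94, which raises the sum by 94 - 49 = 45 = 49 - 4, and 3(4 + 9) = 39 = 49 - 10, so both conditions hold and the product of the digits is 4 × 9 = 36.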
__label__pos
0.999579
Writing apps for USB devices (Windows Store apps using C#/VB/C++) [This article is for Windows 8.x and Windows Phone 8.x developers writing Windows Runtime apps. If you’re developing for Windows 10, see the latest documentation] Windows Runtime in Windows 8.1 provides a new namespace: Windows.Devices.Usb. By using the namespace, you can write a Windows Store app that talks to a custom USB device. "Custom" in this context means, a peripheral device for which Microsoft does not provide an in-box class driver. The official USB specification is the industry standard for hardware manufacturers to make USB peripherals designed for PCs. Windows includes in-box class drivers for most of those devices. For devices that do not have an in-box class drivers, users can install the generic in-box Winusb.sys driver provided by Microsoft. If the driver is Winusb.sys, you can easily write accompanying apps that the user can use to talk to the device. In earlier versions of Windows, such apps were desktop apps that were written by using WinUSB Functions. In Windows 8.1, Windows Store apps can be written by using the new Windows.Devices.Usb namespace. You can use Windows.Devices.Usb, if... • The device driver is the Microsoft-provided Winusb.sys driver. The namespace does not support vendor-supplied device drivers. When you plug in the device, Windows may or may not install that driver automatically, depending on the design of the device. If the driver is not installed automatically, you must do so manually in Device Manager. 1. Right-click the device and select Update Driver Software.... 2. In the wizard, select Browse my computer for driver software. 3. On the next page, select Let me pick from a list of device drivers on my computer. 4. On the next page, from the list, select Universal Serial Bus devices. 5. You should see WinUsb Device. Select it and click Next to install the driver. • You provide the information about your device as device capability declarations in the App Manifest. This allows the app to get associated with the device. For more information, see How to add USB device capabilities to the app manifest. • The device belongs to one of device classes supported by the namespace. Note that a custom device can belong to a pre-defined USB device class or its functionality could be defined by the vendor. Use the namespace for these USB device class, subclass, and protocol codes: • CDC control class (class code: 0x02, subclass code: any, protocol code: any) • Physical class (class code: 0x05, subclass code: any, protocol code: any) • PersonalHealthcare class (class code: 0x0f, subclass code: 0x00, protocol code: 0x00) • ActiveSync class (class code: 0xef, subclass code: 0x01, protocol code: 0x01) • PalmSync class (class code: 0xef, subclass code: 0x01, protocol code: 0x02) • DeviceFirmwareUpdate class (class code: 0xfe, subclass code: 0x01, protocol code: 0x01) • IrDA class (class code: 0, subclass code: 0x02, protocol code: 0x00) • Measurement class (class code: 0xfe, subclass code: 0x03, protocol code: any) • Vendor-specific class (class code: 0xff, subclass code: any, protocol code: any) Do not use the namespace for these USB device classes: Note Instead, use other relevant APIs. These USB device classes are blocked by the namespace to prevent conflict with other APIs. For example, if your device conforms to HID protocol, use Windows.Devices.HumanInterfaceDevice. 
• Audio class (0x01)
• HID class (0x03)
• Image class (0x06)
• Printer class (0x07)
• Mass storage class (0x08)
• Smart card class (0x0B)
• Audio/video class (0x10)
• Wireless controller (such as a wireless USB host/hub) (0xE0)

You should not use Windows.Devices.Usb if...

• Your app wants to access internal devices. Windows.Devices.Usb is designed for accessing peripheral devices. A Windows Store app can access internal USB devices only if it is a privileged app that is explicitly declared by the OEM for that system.
• Your app is a Control Panel app. Apps using the namespace must be per-user apps. The app can communicate with the device but cannot save settings data outside its scope, a functionality required by many Control Panel apps.

The code examples in this topic show common tasks that your app can perform by using the namespace. The examples are in C#.

• Connect to the USB device
• Send control transfers
• Get device information
• Send or receive interrupt data
• Send or receive bulk data
• Change the interface alternate setting of the device
• Step-by-step tutorial
• Windows Store app sample for accessing USB devices

For more information about features and limitations, see these frequently asked questions.

Connect to the USB device

In your app, the first mandatory task is to search for the device in the system by providing information that identifies the device, such as the hardware Id, a device interface GUID, or, if you are searching by device class information, the device subclass or protocol codes. The search can return more than one device. The namespace has been designed so that you can refine the search query with extra information and keep the result set small. From that set, you must select the device you want, obtain a reference to it, and open the device for communication. This code example demonstrates the important calls to search for the device and connect to it.

var deviceQueryString = UsbDevice.GetDeviceSelector(deviceVid, devicePid, deviceInterfaceClass);
var myDevices = await Windows.Devices.Enumeration.DeviceInformation.FindAllAsync(deviceQueryString, null);

var id = myDevices[0].Id;
UsbDevice device = await UsbDevice.FromIdAsync(id);

if (device != null)
{
    MainPage.Current.NotifyUser("Device " + id + " opened", NotifyType.StatusMessage);
}
else
{
    // FromIdAsync returns null if the device could not be opened, for example because it is in use.
    MainPage.Current.NotifyUser("Unable to open device : " + id, NotifyType.ErrorMessage);
}

Send control transfers

An app sends several control transfer requests that can read (IN transfer) or write (OUT transfer) configuration information or perform device-specific functions defined by the hardware vendor. If the transfer performs a write operation, it's an OUT transfer; if it performs a read operation, it's an IN transfer. Regardless of the direction, your app always builds and initiates the request for the transfer. This code example shows how to send a control transfer that reads information from the device.
async Task<IBuffer> SendVendorControlTransferInToDeviceRecipientAsync(Byte vendorCommand, UInt32 dataPacketLength)
{
    // Data will be written to this buffer when we receive it
    var buffer = new Windows.Storage.Streams.Buffer(dataPacketLength);

    UsbSetupPacket setupPacket = new UsbSetupPacket
    {
        RequestType = new UsbControlRequestType
        {
            Direction = UsbTransferDirection.In,
            Recipient = UsbControlRecipient.Device,
            ControlTransferType = UsbControlTransferType.Vendor,
        },
        Request = vendorCommand,
        Length = dataPacketLength
    };

    return await device.SendControlInTransferAsync(setupPacket, buffer);
}

Get device information

A USB device provides information about itself in data structures called USB descriptors. The namespace provides classes that you can use to get various USB descriptors by simply accessing property values. This code example shows how to get the USB device descriptor.

public String GetDeviceDescriptorAsString()
{
    String content = null;
    var deviceDescriptor = DeviceList.Current.CurrentDevice.DeviceDescriptor;

    content = "Device Descriptor\n"
        + "\nUsb Spec Number : 0x" + deviceDescriptor.BcdUsb.ToString("X4", NumberFormatInfo.InvariantInfo)
        + "\nMax Packet Size (Endpoint 0) : " + deviceDescriptor.MaxPacketSize0.ToString("D", NumberFormatInfo.InvariantInfo)
        + "\nVendor ID : 0x" + deviceDescriptor.IdVendor.ToString("X4", NumberFormatInfo.InvariantInfo)
        + "\nProduct ID : 0x" + deviceDescriptor.IdProduct.ToString("X4", NumberFormatInfo.InvariantInfo)
        + "\nDevice Revision : 0x" + deviceDescriptor.BcdDeviceRevision.ToString("X4", NumberFormatInfo.InvariantInfo)
        + "\nNumber of Configurations : " + deviceDescriptor.NumberOfConfigurations.ToString("D", NumberFormatInfo.InvariantInfo);

    return content;
}

Send or receive interrupt data

Your app can write or read data from interrupt endpoints. This involves registering an event handler. Each time the device generates an interrupt, data associated with the interrupt is read into the interrupt endpoint. At that time, the event handler is invoked and your app can access the data. This code example shows how to get interrupt data.
void RegisterForInterruptEvent(UsbDevice CurrentDevice, UInt32 pipeIndex) { if (!registeredInterrupt) { // Search for the correct pipe that has the specified endpoint number var interruptInPipe = CurrentDevice.DefaultInterface.InterruptInPipes[(int) pipeIndex]; registeredInterrupt = true; registeredInterruptPipeIndex = pipeIndex; TypedEventHandler<UsbInterruptInPipe, UsbInterruptInEventArgs> interruptEventHandler = new TypedEventHandler<UsbInterruptInPipe, UsbInterruptInEventArgs>(this.OnGeneralInterruptEvent); interruptInPipe.DataReceived += interruptEventHandler; } } void UnregisterFromInterruptEvent(UsbDevice CurrentDevice) { if (registeredInterrupt) { // Search for the correct pipe that we know we used to register events var interruptInPipe =CurrentDevice.DefaultInterface.InterruptInPipes[(int)registeredInterruptPipeIndex]; interruptInPipe.DataReceived -= interruptEventHandler; registeredInterrupt = false; } } async void OnGeneralInterruptEvent(UsbInterruptInPipe sender, UsbInterruptInEventArgs eventArgs) { numInterruptsReceived++; // The data from the interrupt IBuffer buffer = eventArgs.InterruptData; // Create a DispatchedHandler for the because we are interracting with the UI directly and the // thread that this function is running on may not be the UI thread; if a non-UI thread modifies // the UI, an exception is thrown await rootPage.Dispatcher.RunAsync( CoreDispatcherPriority.Normal, new DispatchedHandler(() => { MainPage.Current.NotifyUser( "Number of interrupt events received: " + numInterruptsReceived.ToString() + "\nReceived " + buffer.Length.ToString() + " bytes", NotifyType.StatusMessage); })); } Send or receive bulk data Your app can send or receive large amounts of data through bulk transfers. Bulk data can take long time to complete depending on the traffic on the bus. However, data delivery is guaranteed. Your app can initiate these transfers and even modify the way the data buffer is sent or received by setting various policy properties. By using the cancellationTokenSource, you can cancel pending requests. After those requests are canceled, the app receives an OperationCanceled exception. For more information, see Cancellation in managed threads. This code example shows how to write data to the device by using bulk transfer. async void BulkWriteAsync(UsbDevice CurrentDevice, UInt32 bulkPipeIndex, UInt32 bytesToWrite) { var arrayBuffer = new Byte[bytesToWrite]; var stream = CurrentDevice.DefaultInterface.BulkOutPipes[(int) bulkPipeIndex].OutputStream; //Initialize the buffer. Not shown. var writer = new DataWriter(stream); writer.WriteBytes(arrayBuffer); runningWriteTask = true; UInt32 bytesWritten = await writer.StoreAsync().AsTask(cancellationTokenSource.Token); runningWriteTask = false; totalBytesWritten += bytesWritten; } Change the interface alternate setting of the device USB devices are configured such that the endpoint buffers (that hold transfer data) are grouped in alternate settings. The device can have many settings but only one can be active at a time. Data transfers can take place to or from the endpoints of the active setting. If the app wants to use other endpoints, it can change the setting and endpoints of that setting become available for transfers. This code example shows how to get the active setting and select an alternate setting. 
async void SetInterfaceSetting(UsbDevice CurrentDevice, Byte settingNumber) { var interfaceSetting = CurrentDevice.DefaultInterface.InterfaceSettings[settingNumber]; await interfaceSetting.SelectSettingAsync(); MainPage.Current.NotifyUser("Interface Setting is set to " + settingNumber, NotifyType.StatusMessage); } void GetInterfaceSetting(UsbDevice CurrentDevice) { var interfaceSettings = CurrentDevice.DefaultInterface.InterfaceSettings; foreach(UsbInterfaceSetting interfaceSetting in interfaceSettings) { if (interfaceSetting.Selected) { Byte interfaceSettingNumber = interfaceSetting.InterfaceDescriptor.AlternateSettingNumber; MainPage.Current.NotifyUser("Interface setting:" + interfaceSettingNumber.ToString("D", NumberFormatInfo.InvariantInfo), NotifyType.StatusMessage); break; } } } Step-by-step tutorial For step-by-step instructions about using these APIs to perform common tasks for communicating with a USB device, see Talking to USB devices, start to finish (Windows Store app). Windows Store app sample for accessing USB devices Get started on writing a Windows Store app by studying these Windows Store app samples for USB. .
__label__pos
0.721242
What is BitV (What does Bit mean?) What is BitV According to official sources, BitV is the next generation upgrade What is BitV (What does Bit mean?) What is BitV According to official sources, BitV is the next generation upgrade version of the Bitcoin core protocol. BitV is a blockchain layered architecture based on POS consensus algorithm and distributed ledger technology, built using DAG (Digital Asset Base) encryption technology. Its main features include security, scalability, and high throughput without sacrificing decentralization; optimizing transaction speed and reducing costs by demand for block space. Compared to traditional POW mechanism (proof-of-work mining), this novel design has stronger resilience and anti-ASIC capabilities: allowing anyone to easily run their own node using the PoS mechanism; and because of its unique functionality similar to POS, it does not affect the security of the entire network. What does Bit mean? Editor’s note: This article comes from the Fenghuolun Community (ID: FHBT18), author: Pepe, Odaily Planet Daily authorized reprint. Hello everyone, I am Pepe. Bit means the combination of mining machines and Bitcoin, also known as “bit” in English, which simply means “Bit.” This term was invented by American mathematician and cryptography expert Edward Hathaway as a concept to explain whether the encryption algorithm in the proof-of-work consensus mechanism of Bitcoin can be integrated with the blockchain. It is a protocol within a digital currency system aimed at ensuring transaction security and price stability by centralizing the computational power of block producers. It can also be referred to as “Bit” or “BTC”, indicating that any coin generated using a certain technology must be added to the network in a certain amount to be called “Bit”; it refers to the common mining hardware: “ASIC” (Application-Specific Integrated Circuit). So why is there such a definition? Because this is actually a relatively complex vocabulary system, so for ordinary people, it may not be very deeply understood. In fact, the operation of the Bitcoin peer-to-peer electronic cash system is more like an application that a company or organization uses to manage its own accounts and other data. Therefore, in many cases, “bit” translated into the Chinese meaning of Bitcoin is to add an address to a separate wallet. If someone wants to own a million bitcoins, they can choose to transfer money to a part of it, but how much amount is required depends on the price of Bitcoin. Of course, when we talk about “bt” here, it does not mean that all bitcoins are the same, but it includes virtual units composed of the same string of code: for example, the original creator of Bitcoin, Whitfield Diffie, the author of the Bitcoin white paper, and a former Google engineer named David Chaum, who respectively serve as the CEO/CTO of the project, and have responsibilities for maintaining servers, developing software, providing cloud storage services, and even specifically working on open source projects. Theoretically, “Bitcoin is a decentralized computer system”, and because it is based on a cryptographic distributed database structure, its design ideas are in line with the characteristics of the current Internet era-users can participate directly in running smart contracts without the need for intermediaries. In addition, what happens when you purchase a device? 
First, you need to determine whether your device is installed with a new version of a computer with a mnemonic, or if you have deployed some new machines. This way, parameters can be automatically set according to your operating habits to ensure the security of the device. The second important thing is, if you don’t have friends who have installed new machines, it is recommended not to try using old graphics cards again! This article and pictures are from the Internet and do not represent aiwaka's position. If you infringe, please contact us to delete:https://www.aiwaka.com/2023/08/17/what-is-bitv-what-does-bit-mean/ It is strongly recommended that you study, review, analyze and verify the content independently, use the relevant data and content carefully, and bear all risks arising therefrom.
__label__pos
0.871428
Members of the KDE Community are recommended to subscribe to the kde-community mailing list at https://mail.kde.org/mailman/listinfo/kde-community to allow them to participate in important discussions and receive other important announcements Commit 625bae01 authored by Marco Martin's avatar Marco Martin preliminar for a new default systemsettings ui sidebar mode: main categories are on a categorized sidebar on the left, main content on the center area parent df4b0260 ......@@ -42,6 +42,7 @@ add_subdirectory(core) add_subdirectory(app) add_subdirectory(categories) add_subdirectory(icons) add_subdirectory(sidebar) add_subdirectory(doc) if(KF5KHtml_FOUND) ...... ......@@ -108,6 +108,7 @@ void SettingsBase::initApplication() for( int pluginsDone = 0; pluginsDone < nbPlugins ; ++pluginsDone ) { KService::Ptr activeService = pluginObjects.at( pluginsDone ); QString error; qWarning()<<"AAA"<<error<<activeService->library(); BaseMode * controller = activeService->createInstance<BaseMode>(this, QVariantList(), &error); if( error.isEmpty() ) { possibleViews.insert( activeService->library(), controller ); ......@@ -388,6 +389,8 @@ void SettingsBase::changeToolBar( BaseMode::ToolBarItems toolbar ) quitBarActions << quitAction; guiFactory()->plugActionList( this, "quit", quitBarActions ); } toolBar()->setVisible(toolbar != BaseMode::NoItems); } void SettingsBase::changeAboutMenu( const KAboutData * menuAbout, QAction * menuItem, QString fallback ) ...... ......@@ -11,7 +11,7 @@ </entry> <entry name="ActiveView" type="String"> <label>Internal name for the view used</label> <default>icon_mode</default> <default>systemsettings_sidebar_mode</default> </entry> </group> </kcfg> set( sidebar_mode_srcs SidebarMode.cpp CategoryDrawer.cpp CategorizedView.cpp ) add_library(systemsettings_sidebar_mode MODULE ${sidebar_mode_srcs}) target_link_libraries(systemsettings_sidebar_mode systemsettingsview KF5::ItemViews KF5::KCMUtils KF5::I18n KF5::KIOWidgets KF5::Service ) install( TARGETS systemsettings_sidebar_mode DESTINATION ${PLUGIN_INSTALL_DIR} ) install( FILES settings-sidebar-view.desktop DESTINATION ${SERVICES_INSTALL_DIR} ) /*************************************************************************** * Copyright (C) 2009 by Rafael Fernández López <[email protected]> * * * * This program is free software; you can redistribute it and/or modify * * it under the terms of the GNU General Public License as published by * * the Free Software Foundation; either version 2 of the License, or * * (at your option) any later version. * * * * This program is distributed in the hope that it will be useful, * * but WITHOUT ANY WARRANTY; without even the implied warranty of * * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * * GNU General Public License for more details. 
* * * * You should have received a copy of the GNU General Public License * * along with this program; if not, write to the * * Free Software Foundation, Inc., * * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA * ***************************************************************************/ #include "CategorizedView.h" #include <KFileItemDelegate> #include <QScrollBar> CategorizedView::CategorizedView( QWidget *parent ) : KCategorizedView( parent ) { setWordWrap( true ); setViewportMargins(QMargins(0,0,-20,0)); } void CategorizedView::setModel( QAbstractItemModel *model ) { KCategorizedView::setModel( model ); } void CategorizedView::wheelEvent(QWheelEvent* event) { // this is a workaround because scrolling by mouse wheel is broken in Qt list views for big items // https://bugreports.qt-project.org/browse/QTBUG-7232 verticalScrollBar()->setSingleStep(10); KCategorizedView::wheelEvent(event); } /*************************************************************************** * Copyright (C) 2009 by Rafael Fernández López <[email protected]> * * * * This program is free software; you can redistribute it and/or modify * * it under the terms of the GNU General Public License as published by * * the Free Software Foundation; either version 2 of the License, or * * (at your option) any later version. * * * * This program is distributed in the hope that it will be useful, * * but WITHOUT ANY WARRANTY; without even the implied warranty of * * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * * GNU General Public License for more details. * * * * You should have received a copy of the GNU General Public License * * along with this program; if not, write to the * * Free Software Foundation, Inc., * * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA * ***************************************************************************/ #ifndef CATEGORIZEDVIEW_H #define CATEGORIZEDVIEW_H #include <KCategorizedView> class CategorizedView : public KCategorizedView { public: CategorizedView( QWidget *parent = 0 ); virtual void setModel( QAbstractItemModel *model ); protected: virtual void wheelEvent(QWheelEvent *); }; #endif /*************************************************************************** * Copyright (C) 2009 by Rafael Fernández López <[email protected]> * * * * This program is free software; you can redistribute it and/or modify * * it under the terms of the GNU General Public License as published by * * the Free Software Foundation; either version 2 of the License, or * * (at your option) any later version. * * * * This program is distributed in the hope that it will be useful, * * but WITHOUT ANY WARRANTY; without even the implied warranty of * * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * * GNU General Public License for more details. 
* * * * You should have received a copy of the GNU General Public License * * along with this program; if not, write to the * * Free Software Foundation, Inc., * * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA * ***************************************************************************/ #include "CategoryDrawer.h" #include "MenuProxyModel.h" #include <QPainter> #include <QApplication> #include <QStyleOption> #include <QDebug> CategoryDrawer::CategoryDrawer(KCategorizedView *view) : KCategoryDrawer(view) { } void CategoryDrawer::drawCategory(const QModelIndex &index, int sortRole, const QStyleOption &option, QPainter *painter) const { Q_UNUSED( option ) Q_UNUSED( painter ) Q_UNUSED( sortRole ) painter->setRenderHint(QPainter::Antialiasing); const QRect optRect = option.rect; QFont font(QApplication::font()); font.setBold(true); const QFontMetrics fontMetrics = QFontMetrics(font); const int height = categoryHeight(index, option); const QString category = index.model()->data(index, KCategorizedSortFilterProxyModel::CategoryDisplayRole).toString(); QRect textRect = QRect(option.rect.topLeft(), QSize(option.rect.width() - 2 - 3 - 3, height)); textRect.setLeft(textRect.left()); painter->save(); painter->setFont(font); QColor penColor(option.palette.text().color()); penColor.setAlphaF(0.6); painter->setPen(penColor); if (index.row() > 0) { textRect.setTop(textRect.top() + 10); painter->save(); penColor.setAlphaF(0.3); painter->fillRect(QRect(textRect.topLeft() + QPoint(0, -5), QSize(option.rect.width(),1)), penColor); painter->restore(); } painter->drawText(textRect, Qt::AlignLeft | Qt::AlignTop, category); painter->restore(); } int CategoryDrawer::categoryHeight(const QModelIndex &index, const QStyleOption &option) const { Q_UNUSED( index ); Q_UNUSED( option ); QFont font(QApplication::font()); font.setBold(true); const QFontMetrics fontMetrics = QFontMetrics(font); if (index.row() == 0) return fontMetrics.height(); return fontMetrics.height() * 1.6 /* vertical spacing */; } int CategoryDrawer::leftMargin() const { return 0; } int CategoryDrawer::rightMargin() const { return 0; } /*************************************************************************** * Copyright (C) 2009 by Rafael Fernández López <[email protected]> * * * * This program is free software; you can redistribute it and/or modify * * it under the terms of the GNU General Public License as published by * * the Free Software Foundation; either version 2 of the License, or * * (at your option) any later version. * * * * This program is distributed in the hope that it will be useful, * * but WITHOUT ANY WARRANTY; without even the implied warranty of * * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * * GNU General Public License for more details. 
* * * * You should have received a copy of the GNU General Public License * * along with this program; if not, write to the * * Free Software Foundation, Inc., * * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA * ***************************************************************************/ #ifndef CATEGORYDRAWER_H #define CATEGORYDRAWER_H #include <KCategoryDrawer> class QPainter; class QModelIndex; class QStyleOption; class CategoryDrawer : public KCategoryDrawer { Q_OBJECT public: CategoryDrawer(KCategorizedView *view); virtual void drawCategory(const QModelIndex &index, int sortRole, const QStyleOption &option, QPainter *painter) const; virtual int categoryHeight(const QModelIndex &index, const QStyleOption &option) const; virtual int leftMargin() const; virtual int rightMargin() const; }; #endif /************************************************************************** * Copyright (C) 2009 by Ben Cooksley <[email protected]> * * * * This program is free software; you can redistribute it and/or * * modify it under the terms of the GNU General Public License * * as published by the Free Software Foundation; either version 2 * * of the License, or (at your option) any later version. * * * * This program is distributed in the hope that it will be useful, * * but WITHOUT ANY WARRANTY; without even the implied warranty of * * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * * GNU General Public License for more details. * * * * You should have received a copy of the GNU General Public License * * along with this program; if not, write to the Free Software * * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA * * 02110-1301, USA. * ***************************************************************************/ #include "SidebarMode.h" #include "CategoryDrawer.h" #include "CategorizedView.h" #include "MenuItem.h" #include "MenuModel.h" #include "ModuleView.h" #include "MenuProxyModel.h" #include "BaseData.h" #include <QHBoxLayout> #include <QAction> #include <KAboutData> #include <KStandardAction> #include <KFileItemDelegate> #include <KLocalizedString> #include <KIconLoader> #include <KLineEdit> #include <KServiceTypeTrader> #include <QDebug> K_PLUGIN_FACTORY( SidebarModeFactory, registerPlugin<SidebarMode>(); ) class SidebarMode::Private { public: Private() : categoryDrawer( 0 ), categoryView( 0 ), moduleView( 0 ) {} virtual ~Private() { delete aboutIcon; } KLineEdit * searchText; KCategoryDrawer * categoryDrawer; KCategorizedView * categoryView; QWidget * mainWidget; QHBoxLayout * mainLayout; MenuProxyModel * proxyModel; KAboutData * aboutIcon; ModuleView * moduleView; }; SidebarMode::SidebarMode( QObject *parent, const QVariantList& ) : BaseMode( parent ) , d( new Private() ) { d->aboutIcon = new KAboutData( "SidebarView", i18n( "Sidebar View" ), "1.0", i18n( "Provides a categorized sidebar for control modules." 
), KAboutLicense::GPL, i18n( "(c) 2017, Marco Martin" ) ); d->aboutIcon->addAuthor( i18n( "Marco Martin" ), i18n( "Author" ), "[email protected]" ); d->aboutIcon->addAuthor( i18n( "Ben Cooksley" ), i18n( "Author" ), "[email protected]" ); d->aboutIcon->addAuthor( i18n( "Mathias Soeken" ), i18n( "Developer" ), "[email protected]" ); d->aboutIcon->setProgramIconName( "view-sidetree" ); } SidebarMode::~SidebarMode() { delete d; } KAboutData * SidebarMode::aboutData() { return d->aboutIcon; } ModuleView * SidebarMode::moduleView() const { return d->moduleView; } QWidget * SidebarMode::mainWidget() { if( !d->categoryView ) { initWidget(); } return d->mainWidget; } QList<QAbstractItemView*> SidebarMode::views() const { QList<QAbstractItemView*> list; list.append( d->categoryView ); return list; } void SidebarMode::initEvent() { MenuModel * model = new MenuModel( rootItem(), this ); foreach( MenuItem * child, rootItem()->children() ) { model->addException( child ); } d->proxyModel = new MenuProxyModel( this ); d->proxyModel->setCategorizedModel( true ); d->proxyModel->setSourceModel( model ); d->proxyModel->sort( 0 ); d->mainWidget = new QWidget(); d->mainLayout = new QHBoxLayout(d->mainWidget); d->mainLayout->setContentsMargins(0, 0, 0, 0); d->moduleView = new ModuleView( d->mainWidget ); connect( d->moduleView, &ModuleView::moduleChanged, this, &SidebarMode::moduleLoaded ); connect( d->moduleView, &ModuleView::closeRequest, this, &SidebarMode::leaveModuleView ); d->categoryView = 0; } void SidebarMode::searchChanged( const QString& text ) { d->proxyModel->setFilterRegExp( text ); if ( d->categoryView ) { QAbstractItemModel *model = d->categoryView->model(); const int column = d->categoryView->modelColumn(); const QModelIndex root = d->categoryView->rootIndex(); for ( int i = 0; i < model->rowCount(); ++i ) { const QModelIndex index = model->index( i, column, root ); if ( model->flags( index ) & Qt::ItemIsEnabled ) { d->categoryView->scrollTo( index ); break; } } } } void SidebarMode::changeModule( const QModelIndex& activeModule ) { d->moduleView->closeModules(); d->moduleView->loadModule( activeModule ); } void SidebarMode::moduleLoaded() { emit changeToolBarItems(BaseMode::NoItems); } void SidebarMode::initWidget() { // Create the widgets QWidget *sidebar = new QWidget(d->mainWidget); sidebar->setBackgroundRole(QPalette::Base); sidebar->setFixedWidth(250); sidebar->setAutoFillBackground(true); QVBoxLayout *sidebarLayout = new QVBoxLayout(sidebar); sidebarLayout->setSpacing(0); sidebarLayout->setContentsMargins(0, 0, 0, 0); // Initialise search d->searchText = new KLineEdit( sidebar ); d->searchText->setClearButtonShown( true ); d->searchText->setPlaceholderText( i18nc( "Search through a list of control modules", "Search" ) ); d->searchText->setCompletionMode( KCompletion::CompletionPopup ); sidebarLayout->addWidget( d->searchText ); // Prepare the Base Data MenuItem *rootModule = new MenuItem( true, 0 ); initMenuList(rootModule); BaseData::instance()->setMenuItem( rootModule ); connect(d->searchText, &KLineEdit::textChanged, this, &SidebarMode::searchChanged); d->searchText->completionObject()->setIgnoreCase( true ); d->searchText->completionObject()->setItems( BaseData::instance()->menuItem()->keywords() ); d->categoryView = new CategorizedView( sidebar ); sidebarLayout->addWidget( d->categoryView ); d->categoryDrawer = new CategoryDrawer(d->categoryView); d->categoryView->setSelectionMode( QAbstractItemView::SingleSelection ); d->categoryView->setCategoryDrawer( d->categoryDrawer ); 
d->categoryView->setCategorySpacing(0); d->categoryView->setIconSize(QSize(KIconLoader::SizeSmallMedium, KIconLoader::SizeSmallMedium)); d->categoryView->setVerticalScrollMode(QAbstractItemView::ScrollPerPixel); d->categoryView->setViewMode( QListView::ListMode ); d->categoryView->setMouseTracking( true ); d->categoryView->viewport()->setAttribute( Qt::WA_Hover ); //KFileItemDelegate *delegate = new KFileItemDelegate( d->categoryView ); //delegate->setWrapMode( QTextOption::WordWrap ); //d->categoryView->setItemDelegate( delegate ); d->categoryView->setFrameShape( QFrame::NoFrame ); d->categoryView->setModel( d->proxyModel ); connect( d->categoryView, &QAbstractItemView::activated, this, &SidebarMode::changeModule ); d->mainLayout->addWidget( sidebar ); d->mainLayout->addWidget( d->moduleView ); emit changeToolBarItems(BaseMode::NoItems); d->searchText->setFocus(Qt::OtherFocusReason); } void SidebarMode::initMenuList(MenuItem * parent) { KService::List categories = KServiceTypeTrader::self()->query("SystemSettingsCategory"); KService::List modules = KServiceTypeTrader::self()->query("KCModule", "[X-KDE-System-Settings-Parent-Category] != ''"); // look for any categories inside this level, and recurse into them for (int i = 0; i < categories.size(); ++i) { const KService::Ptr entry = categories.at(i); const QString parentCategory = entry->property("X-KDE-System-Settings-Parent-Category").toString(); const QString parentCategory2 = entry->property("X-KDE-System-Settings-Parent-Category-V2").toString(); if ( parentCategory == parent->category() || // V2 entries must not be empty if they want to become a proper category. ( !parentCategory2.isEmpty() && parentCategory2 == parent->category() ) ) { MenuItem * menuItem = new MenuItem(true, parent); menuItem->setService( entry ); if( menuItem->category() == "lost-and-found" ) { //lostFound = menuItem; continue; } initMenuList( menuItem ); } } KService::List removeList; // scan for any modules at this level and add them for (int i = 0; i < modules.size(); ++i) { const KService::Ptr entry = modules.at(i); const QString category = entry->property("X-KDE-System-Settings-Parent-Category").toString(); const QString category2 = entry->property("X-KDE-System-Settings-Parent-Category-V2").toString(); if( !parent->category().isEmpty() && (category == parent->category() || category2 == parent->category()) ) { // Add the module info to the menu MenuItem * infoItem = new MenuItem(false, parent); infoItem->setService( entry ); removeList.append( modules.at(i) ); } } for (int i = 0; i < removeList.size(); ++i) { modules.removeOne( removeList.at(i) ); } parent->sortChildrenByWeight(); } void SidebarMode::leaveModuleView() { d->moduleView->closeModules(); // We have to force it here } void SidebarMode::giveFocus() { d->categoryView->setFocus(); } #include "SidebarMode.moc" /*************************************************************************** * Copyright (C) 2009 by Ben Cooksley <[email protected]> * * * * This program is free software; you can redistribute it and/or modify * * it under the terms of the GNU General Public License as published by * * the Free Software Foundation; either version 2 of the License, or * * (at your option) any later version. * * * * This program is distributed in the hope that it will be useful, * * but WITHOUT ANY WARRANTY; without even the implied warranty of * * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * * GNU General Public License for more details. 
* * * * You should have received a copy of the GNU General Public License * * along with this program; if not, write to the * * Free Software Foundation, Inc., * * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA * ***************************************************************************/ #ifndef SIDEBARMODE_H #define SIDEBARMODE_H #include "BaseMode.h" class ModuleView; class KAboutData; class QModelIndex; class QAbstractItemView; class SidebarMode : public BaseMode { Q_OBJECT public: SidebarMode(QObject * parent, const QVariantList& ); ~SidebarMode(); QWidget * mainWidget(); void initEvent(); void giveFocus(); void leaveModuleView(); KAboutData * aboutData(); ModuleView * moduleView() const; protected: QList<QAbstractItemView*> views() const; public Q_SLOTS: void searchChanged( const QString& text ); private Q_SLOTS: void changeModule( const QModelIndex& activeModule ); void moduleLoaded(); void initWidget(); private: void initMenuList(MenuItem * parent); class Private; Private *const d; }; #endif [Desktop Entry] Icon=view-sidetree Type=Service X-KDE-ServiceTypes=SystemSettingsView X-KDE-Library=systemsettings_sidebar_mode X-KDE-Keywords=System Settings X-KDE-Keywords[ar]=إعدادات النّظام X-KDE-Keywords[bs]=Sistemske postavke X-KDE-Keywords[ca]=Arranjament del sistema X-KDE-Keywords[ca@valencia]=Arranjament del sistema X-KDE-Keywords[cs]=Nastavení systému X-KDE-Keywords[da]=Systemindstillinger X-KDE-Keywords[de]=Systemeinstellungen X-KDE-Keywords[el]=Ρυθμίσεις συστήματος X-KDE-Keywords[en_GB]=System Settings X-KDE-Keywords[eo]=Sistema agordo X-KDE-Keywords[es]=Preferencias del sistema X-KDE-Keywords[et]=Süsteemi seadistused X-KDE-Keywords[eu]=Sistemaren ezarpenak X-KDE-Keywords[fi]=järjestelmä, asetukset X-KDE-Keywords[fr]=Configuration du système X-KDE-Keywords[ga]=Socruithe an Chórais X-KDE-Keywords[gl]=Configuración do sistema X-KDE-Keywords[he]=הגדרות מערכת X-KDE-Keywords[hu]=Rendszerbeállítások X-KDE-Keywords[ia]=Preferentias de systema X-KDE-Keywords[id]=Pengaturan Sistem X-KDE-Keywords[is]=Kerfisstillingar X-KDE-Keywords[it]=Impostazioni di sistema X-KDE-Keywords[ja]=システム設定 X-KDE-Keywords[kk]=System Settings,Жүйе параметрлері X-KDE-Keywords[km]=ការ​កំណត់​ប្រព័ន្ធ​ X-KDE-Keywords[ko]=시스템 설정 X-KDE-Keywords[lt]=Sistemos nuostatos X-KDE-Keywords[lv]=Sistēmas iestatījumi X-KDE-Keywords[mr]=प्रणाली संयोजना X-KDE-Keywords[nb]=Systeminnstillinger X-KDE-Keywords[nds]=Systeeminstellen X-KDE-Keywords[nl]=Systeeminstellingen X-KDE-Keywords[nn]=Systemoppsett X-KDE-Keywords[pa]=ਸਿਸਟਮ ਸੈਟਿੰਗ X-KDE-Keywords[pl]=Ustawienia systemowe X-KDE-Keywords[pt]=Configuração do Sistema X-KDE-Keywords[pt_BR]=Configurações do sistema X-KDE-Keywords[ro]=Configurări de sistem X-KDE-Keywords[ru]=Параметры системы X-KDE-Keywords[sk]=Systémové nastavenia X-KDE-Keywords[sl]=Sistemske nastavitve X-KDE-Keywords[sr]=System Settings,Системске поставке X-KDE-Keywords[sr@ijekavian]=System Settings,Системске поставке X-KDE-Keywords[sr@ijekavianlatin]=System Settings,Sistemske postavke X-KDE-Keywords[sr@latin]=System Settings,Sistemske postavke X-KDE-Keywords[sv]=Systeminställningar X-KDE-Keywords[tg]=Танзимотҳои система X-KDE-Keywords[tr]=Sistem Ayarları X-KDE-Keywords[ug]=سىستېما تەڭشەكلىرى X-KDE-Keywords[uk]=система,параметри,System Settings,системні параметри X-KDE-Keywords[vi]=Thiết lập hệ thống X-KDE-Keywords[x-test]=xxSystem Settingsxx X-KDE-Keywords[zh_CN]=System Settings,系统设置 X-KDE-Keywords[zh_TW]=System Settings Name=Sidebar View Comment=Categorized sidebar style Markdown is supported 0% or You are 
__label__pos
0.864111
Dig Command in Linux Explained

Dig command in Linux is commonly used for retrieving the DNS information of a remote server. Learn how to use the dig command and understand its output.

The 'dig' command is commonly used among system/network administrators in Linux. It is an acronym for 'Domain Information Groper', and it is intended to query the DNS of a given server and let you see the answers from the queried domain servers. Let's see how the command works and how to understand its output:

[Screenshot: dig command execution]

The very first line outputs the version of the program (9.11.3) and indicates where the query is being launched from and to. In this case, it's from my Ubuntu machine to the linuxhandbook.com server. Then it displays the answer obtained from the (domain) server. It displays the address that the name linuxhandbook.com points to in its A record. This may or may not be the IP address of the server, because if something uses a DNS firewall or a "façade" server for security or filtering purposes, we would see that first; but this is not the case with the linuxhandbook.com server. In many cases, dig is good enough to find the IP address of a website. Lastly, it will give stats about the query, which can be useful if we are assessing the speed involved in the query.

OK, but what is the usage or value of the dig command? Well, in reality, it is useful depending on what type of information you are looking for. Keep in mind you have to know a little bit about DNS first, like what types of DNS records exist and what they are used for. A common example would be to find out where a particular domain hosts its email. In this case:

[Screenshot: using dig to know what's the MX record of a domain]

We try to 'dig' the MX record for the domain microsoft.com, as we would like to know where it is hosted. We see it replies with: microsoft-com.mail.protection.outlook.com. This is Microsoft's email protection service, which they use to protect anything coming into and going out of the domain microsoft.com via email, and this way prevent viruses, trojans, spam, etc.

What if I want to know more about an IP? That's another usage of the 'dig' command. If you pass it like this:

[Screenshot: dig an IP]

You can then know more about a specific IP. In this case, we used linuxhandbook.com's reported IP with the "-x" option, and it replied by saying that the IP belongs to cloudwayapps.com, which is part of the Cloudways service, the current hosting company for our linuxhandbook.com website.

Multiple digging

You can even use it to 'dig' several domains at the same time, by simply putting the list of domains you wish to know more information about:

[Screenshot: dig multiple domains]

In conclusion, the 'dig' command allows you to drill down into information about a particular domain and/or IP, and know more about its DNS settings. The combination of options is the most important part, and you can always use the man pages for the command to know more about the different operators and what they can give you. I simplified the most common usages, but there is plenty to dig for in this command!
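If you want the same kind of forward and reverse lookups from a script instead of the dig command line, Python's standard socket module covers the basic cases. This is a side sketch rather than anything dig-specific; the hostname is just the one discussed above, and record types such as MX are not available through socket, so they still call for dig or a dedicated DNS library.

```python
import socket

hostname = "linuxhandbook.com"

# Forward lookup: roughly what the ANSWER section of "dig <domain>" reports for A records.
canonical_name, aliases, addresses = socket.gethostbyname_ex(hostname)
print(hostname, "resolves to:", ", ".join(addresses))

# Reverse lookup: the scripted counterpart of "dig -x <ip>".
ip = addresses[0]
try:
    reverse_name, _, _ = socket.gethostbyaddr(ip)
    print(ip, "reverse-resolves to:", reverse_name)
except socket.herror:
    print("No PTR record found for", ip)
```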
__label__pos
0.815295
1. Not finding help here? Sign up for a free 30min tutor trial with Chegg Tutors Dismiss Notice Dismiss Notice Join Physics Forums Today! The friendliest, high quality science and math community on the planet! Everyone who loves science is here! Complex Functions 1. Jun 12, 2008 #1 1. The problem statement, all variables and given/known data Find the limit : [​IMG] [​IMG] 3. The attempt at a solution [​IMG] 1. The problem statement, all variables and given/known data result 3. The attempt at a solution   2. jcsd 3. Jun 12, 2008 #2 Last Qiustion::biggrin: [​IMG]   4. Jun 15, 2008 #3 please wait:zzz:   5. Jun 16, 2008 #4 Dick User Avatar Science Advisor Homework Helper cos(pi/2)+i*sin(pi/2)=i, not i^(1/2). ???   6. Jun 16, 2008 #5 Dick User Avatar Science Advisor Homework Helper But what? Isn't the limit still 1?   7. Jun 17, 2008 #6 By imposing I said let w=i===>w^2=-1   8. Jun 17, 2008 #7 Yes, Find this limit   9. Jun 17, 2008 #8 Dick User Avatar Science Advisor Homework Helper I don't understand that at all.   10. Jun 17, 2008 #9 Dick User Avatar Science Advisor Homework Helper Substitute zbar for z in the power series. What's wrong with that?   11. Jun 17, 2008 #10 12. Jun 17, 2008 #11 Dick User Avatar Science Advisor Homework Helper It doesn't have to be analytical. You've shown using the series (or l'Hopital) that if z_n is a series of complex numbers approaching 0, then sin(z_n)/z_n->1. z_n* is also a series of complex numbers approaching 0. The series expansion holds for ANY z.   13. Jun 17, 2008 #12 Like this [​IMG]   14. Jun 17, 2008 #13 Infact: [​IMG] delta=???   15. Jun 17, 2008 #14 Dick User Avatar Science Advisor Homework Helper Sure. |(sin(z*)/z*|=|sin(z)/z|. Because (sin(z)/z)*=(sin(z*)/z*) and |z|=|z*|. So you don't need analyticity, correct?   16. Jun 17, 2008 #15 Dear: Dick correct 100%. Thank you for answering me Thank you very much:blushing: In Arabic: :biggrin:شكرًا جزيلاً   17. Jun 17, 2008 #16 Dick User Avatar Science Advisor Homework Helper Sure. Sorry, I'm not good at the script. afwan.   18. Jun 17, 2008 #17 O. My Dod afwan:surprised very very Excellent Rather than to learn English You have learned Arabic   19. Jun 17, 2008 #18 Dick User Avatar Science Advisor Homework Helper shukran, afwan, is about as far as I go. Oh, and salam alekum. That's it. I don't even remember how to count, even though this is a math site. So you might want to keep learning english. :)   Last edited: Jun 17, 2008 20. Sep 24, 2008 #19 show that :   Know someone interested in this topic? Share this thread via Reddit, Google+, Twitter, or Facebook Similar Discussions: Complex Functions 1. Complex functions (Replies: 6) 2. Complex function (Replies: 11) 3. The complex function (Replies: 1) 4. [Complex Functions] (Replies: 1) 5. Complex function (Replies: 17) Loading...
__label__pos
0.921924
aboutsummaryrefslogtreecommitdiffstats path: root/Documentation/userspace-api/media/glossary.rst blob: 59a95dba59092178f82acf6ccff89bf5e24a31b2 (plain) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 .. SPDX-License-Identifier: GPL-2.0 OR GFDL-1.1-no-invariants-or-later ======== Glossary ======== .. note:: The goal of this section is to standardize the terms used within the media userspace API documentation. This is Work In Progress. .. Please keep the glossary entries in alphabetical order .. glossary:: Bridge Driver A :term:`device driver` that implements the main logic to talk with media hardware. CEC API **Consumer Electronics Control API** An API designed to receive and transmit data via an HDMI CEC interface. See :ref:`cec`. Device Driver Part of the Linux Kernel that implements support for a hardware component. Device Node A character device node in the file system used to control and transfer data in and out of a Kernel driver. Digital TV API **Previously known as DVB API** An API designed to control a subset of the :term:`Media Hardware` that implements digital TV (e. g. DVB, ATSC, ISDB, etc). See :ref:`dvbapi`. DSP **Digital Signal Processor** A specialized :term:`Microprocessor`, with its architecture optimized for the operational needs of digital signal processing. FPGA **Field-programmable Gate Array** An :term:`IC` circuit designed to be configured by a customer or a designer after manufacturing. See https://en.wikipedia.org/wiki/Field-programmable_gate_array. Hardware Component A subset of the :term:`media hardware`. For example an :term:`I²C` or :term:`SPI` device, or an :term:`IP block` inside an :term:`SoC` or :term:`FPGA`. Hardware Peripheral A group of :term:`hardware components <hardware component>` that together make a larger user-facing functional peripheral. For instance, the :term:`SoC` :term:`ISP` :term:`IP block <ip block>` and the external camera sensors together make a camera hardware peripheral. Also known as :term:`peripheral`. I²C **Inter-Integrated Circuit** A multi-master, multi-slave, packet switched, single-ended, serial computer bus used to control some hardware components like sub-device hardware components. See http://www.nxp.com/docs/en/user-guide/UM10204.pdf. IC **Integrated circuit** A set of electronic circuits on one small flat piece of semiconductor material, normally silicon. Also known as chip. IP Block **Intellectual property core** In electronic design a semiconductor intellectual property core, is a reusable unit of logic, cell, or integrated circuit layout design that is the intellectual property of one party. IP Blocks may be licensed to another party or can be owned and used by a single party alone. See https://en.wikipedia.org/wiki/Semiconductor_intellectual_property_core). 
ISP **Image Signal Processor** A specialized processor that implements a set of algorithms for processing image data. ISPs may implement algorithms for lens shading correction, demosaicing, scaling and pixel format conversion as well as produce statistics for the use of the control algorithms (e.g. automatic exposure, white balance and focus). Media API A set of userspace APIs used to control the media hardware. It is composed by: - :term:`CEC API`; - :term:`Digital TV API`; - :term:`MC API`; - :term:`RC API`; and - :term:`V4L2 API`. See :doc:`index`. MC API **Media Controller API** An API designed to expose and control the relationships between multimedia devices and sub-devices. See :ref:`media_controller`. MC-centric :term:`V4L2 hardware` device driver that requires :term:`MC API`. Such drivers have ``V4L2_CAP_IO_MC`` device_caps field set (see :ref:`VIDIOC_QUERYCAP`). See :ref:`v4l2_hardware_control` for more details. Media Hardware Subset of the hardware that is supported by the Linux Media API. This includes audio and video capture and playback hardware, digital and analog TV, camera sensors, ISPs, remote controllers, codecs, HDMI Consumer Electronics Control, HDMI capture, etc. Microprocessor Electronic circuitry that carries out the instructions of a computer program by performing the basic arithmetic, logical, control and input/output (I/O) operations specified by the instructions on a single integrated circuit. Peripheral The same as :term:`hardware peripheral`. RC API **Remote Controller API** An API designed to receive and transmit data from remote controllers. See :ref:`remote_controllers`. SMBus A subset of I²C, which defines a stricter usage of the bus. SPI **Serial Peripheral Interface Bus** Synchronous serial communication interface specification used for short distance communication, primarily in embedded systems. SoC **System on a Chip** An integrated circuit that integrates all components of a computer or other electronic systems. V4L2 API **V4L2 userspace API** The userspace API defined in :ref:`v4l2spec`, which is used to control a V4L2 hardware. V4L2 Device Node A :term:`device node` that is associated to a V4L driver. The V4L2 device node naming is specified at :ref:`v4l2_device_naming`. V4L2 Hardware Part of the media hardware which is supported by the :term:`V4L2 API`. V4L2 Sub-device V4L2 hardware components that aren't controlled by a :term:`bridge driver`. See :ref:`subdev`. Video-node-centric V4L2 device driver that doesn't require a media controller to be used. Such drivers have the ``V4L2_CAP_IO_MC`` device_caps field unset (see :ref:`VIDIOC_QUERYCAP`). V4L2 Sub-device API Part of the :term:`V4L2 API` which control :term:`V4L2 sub-devices <V4L2 Sub-device>`, like sensors, HDMI receivers, scalers, deinterlacers. See :ref:`v4l2_hardware_control` for more details. Privacy Policy
__label__pos
0.66406
Use lookarounds to eliminate special cases in split

The split built-in takes a string and turns it into a list, discarding the separators that you specify as a pattern. This is easy when the separator is simple, but seems hard if the separator gets more tricky. For a simple example, you can split an entry from /etc/passwd (although the getpw* functions will do that for you):

root:*:0:0:System Administrator:/var/root:/bin/sh

The colons separate the fields, so you split on a colon:

my @fields = split /:/, $passwd_line;

That works just fine because the separator is a single character, that character is the same between each field, and the separator character doesn't appear in any of the data.

A slightly more tricky example has a character from the separator also show up in the data. Consider comma-separated values which also allow a comma in the data. If you really have to do this, you would use a module (Item 115. Don't use regular expressions for comma-separated values). However, this is a good task to illustrate some of the tricks in this Item.

You might see these data stored in many ways. You are likely to see all the fields quoted if any one of them has the comma:

"Buster","Roscoe, Cat","Mimi"

You can split on ",", which separates all the fields:

my $string = q("Buster","Roscoe, Cat","Mimi");
my @fields = split /","/, $string;

$" = "\n";
print "@fields\n";

However, the first and last fields have remnants of the quoting:

"Buster
Roscoe, Cat
Mimi"

You might think that you can make special cases to handle the beginning and end of the string bits. Creating special cases is almost always what you want to avoid: they make the code more complicated and they make you think about more than you really need to think about. Still, you can do that with alternations in the pattern:

my $string = q("Buster","Roscoe, Cat","Mimi");
my @fields = split /\A"|","|"\z/, $string;

$" = "\n";
print "@fields\n";

And, it doesn't work. The split maintains leading empty fields, so we get an extra field at the start:

Buster
Roscoe, Cat
Mimi

You could handle that by removing the first element, but that's more duct tape and spit over the other kludge. Not only do you have two special cases in the pattern, but you have a special case in the output.

You don't have to remove the quotes right away though. You can reduce all the special cases by not matching the quote characters in the split pattern. You can use a lookaround to find the commas surrounded by quotes:

my $string = q("Buster","Roscoe, Cat","Mimi");
my @fields = split /(?<="),(?=")/, $string;

$" = "\n";
print "@fields\n";

The positive lookbehind, (?<=...), is a zero-width assertion. It matches a pattern that exists (hence positive) but doesn't consume the characters it matches. You already know about other zero-width assertions, such as \b and ^. These merely match a condition in the string before the pattern. The positive lookahead, (?=...), is the same thing, but looks forward of the pattern.
Now all of the fields retain their quotes because the lookarounds do not consume the characters they match, even though they assert those characters must be there: "Buster" "Roscoe, Cat" "Mimi" You can easily strip off the quotes, handling every element returned by split in the same way: use v5.14; my $string = q("Buster","Roscoe, Cat","Mimi"); my @fields = map { s/\A"|"\Z//gr } split /(?<="),(?=")/, $string; $" = "\n"; print "@fields\n"; The pattern has no special cases, and the output from split has no special cases. Eliminating special cases reduces the number of things you have to remember and the reduces the likelihood that you'll mess up one of the cases. Buster Roscoe, Cat Mimi What if the separator where even more complex, with a literal quote mark inside the data? If you can do that, you can imagine a quote character next to a comma in the field: "Buster","Roscoe "","" Cat","Mimi" Now you want to split on a comma with quotes around it, but only if it doesn't have two consecutive quotes on either side. You can combine the positive lookarounds with negative lookarounds. The negative versions act the same, but assert that the condition cannot match, just like a \B asserts that the position is not a word boundary: use v5.14; my $string = q("Buster","Roscoe "","" Cat","Mimi"); my @fields = map { s/"(?=")//gr } map { s/\A"|"\z//gr } split /(?<!"")(?<="),(?=")(?!"")/, $string; $" = "\n"; print "@fields\n"; In processing the "", you use another positive lookahead to unescape the doubled double quote character: Buster Roscoe "," Cat Mimi As a final example, instead of quoted fields, you might see the non-separator comma as an escaped character: Buster,Roscoe\, Cat,Mimi In this case, you only want to split on a comma that does not have an escape character before it. You can't use a positive lookbehind because you don't want to match characters before the comma. Instead, you want a negative lookbehind because you want to assert that there are characters that can't appear before the comma. Instead of a =, you use a !: use v5.14; my $string = q(Buster,Roscoe\\, Cat,Mimi); my @fields = map { s/\\(?=,)//gr } split /(?<!\\),/, $string; $" = "\n"; print "@fields\n"; Again, you use another positive lookahead, (?=,), in the s/// so you substitution pattern does not match the character that you don't want to replace. Otherwise, you'd have to type the comma twice: s/\\,/,/gr You can go even further with these examples, creating much more ugly and complex examples with additional levels of quoting. This should naturally lead you to believe that regular expressions aren't the best tool for this (or at least a single regular expression). Things to remember • If you really have to parse comma-separated values, use a module instead of writing your own patterns • Lookarounds assert a condition in the string without consuming any characters • The positive lookarounds assert their patterns must match • The negative lookarounds assert their pattern must not match • Use the lookarounds to eliminate special cases in complex split patterns Post to Twitter Post to Delicious Post to Digg Post to Facebook Post to Reddit Leave a comment 0 Comments. Leave a Reply You must be logged in to post a comment.
__label__pos
0.701121
Premium Member Database last update: Thursday, May 24, 2018 6:02:50 GMT-0700 Identifying the Network and Broadcast Address of a Subnet In this lesson we will attempt to simplify the identification of the Network and Broadcast address using a known IP address, within the network or subnet, and the CIDR or Netmask. In this lesson we will walk you through the terms you need to know, the basic math and some examples. Terms you need to know: CIDR: Classless Inter-Domain Routing. Think of it as a replacement for a Netmask. The CIDR Value is equivalent to the number of on bits in a 32 bit address going left to right. For example: the CIDR value of 24 means the first 24 bits are turned on and the last 8 bits are turned off: 11111111.11111111.11111111.00000000. (See RFC's: 1519, 1817, 4632). Network Address (or Network ID): This is the address that identifies the subnet of a host. Broadcast Address: An IP Address that allows information to be sent to all machines on a given subnet rather than a specific machine. (See RFCs: 826, 919, 922, 947, 1027, 1770, 3021). Binary: A base 2 numbering system (machine language). Bitwise AND Operator: Represented by the ?&? symbol, the Bitwise AND Operator returns a one in each bit position if both corresponding bits are one. Example: x & y = z. Binary Inversion: In a Binary CIDR or Netmask we are inverting the ones to zeros and the zeros to ones. Bitwise OR Operator: Represented by the ?|? symbol, the Bitwise OR Operator returns a 1 in each bit position if one or both corresponding bits are one. The Steps to identify the Network and Broadcast Address of a Subnet Convert the IP Address and CIDR (or Netmask) to binary. In our lesson entitled Decimal and Binary Conversion of IP Addresses we gave you the tools to convert any IP to Binary. If you need additional help you can try our handy IP Conversion Calculators. Use a Bitwise AND (IP & CIDR) Operator to return the corresponding values of the IP and CIDR addresses. This gives you the Network Address (Network ID) A simple way to use the Bitwise AND Operator in Binary is show in the following example: IP Address: 192.168.1.15 CIDR: 24 (Netmask: 255.255.255.0) Binary IP Address: 11000000.10101000.00000001.00001111 Binary CIDR: 11111111.11111111.11111111.00000000 Using the Bitwise AND (&) Operator, compare the Binary IP Address to the Binary CIDR Address. The result will be the Network Address of the IP Address we are using: Binary IP: 11000000.10101000.00000001.00001111 Binary CIDR: 11111111.11111111.11111111.00000000 Binary Network: 11000000.10101000.00000001.00000000 The resultant Network Address is 11000000.10101000.00000001.00000000. Converting this back to the format of an IPv4 Address gives us 192.168.1.0. This is our Network Address. Therefore, 192.168.1.15 belongs to the 192.168.1.0/24 network. To get the Broadcast Address we need to do a Binary inversion of the CIDR or Netmask Address. The inversion of the CIDR Address of 11111111.11111111.11111111.00000000 becomes: 00000000.00000000.00000000.11111111. Now we use the Bitwise OR Operator on the Binary Network Address and the inverted CIDR Address to get the Broadcast address. Binary Network Address: 11000000.10101000.00000001.00000000 Inverted Binary CIDR: 00000000.00000000.00000000.11111111 Binary Broadcast Address: 11000000.10101000.00000001.11111111 We now convert 11000000.10101000.00000001.11111111 to IPv4 Decimal octet: 192.168.1.255. The Broadcast Address for the 192.168.1.0/24 Subnet is 192.168.1.255. 
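If you prefer to check this arithmetic with a few lines of code, the sketch below mirrors the same AND/OR steps in Python. It is an extra illustration rather than part of the lesson, and the helper name network_and_broadcast is just an illustrative choice.

```python
import socket
import struct

def network_and_broadcast(ip, cidr):
    # Convert the dotted IP to a 32-bit integer (the same bits shown in the binary workings above).
    ip_int = struct.unpack("!I", socket.inet_aton(ip))[0]
    # Build the netmask from the CIDR value: 'cidr' one-bits followed by zero-bits.
    mask = (0xFFFFFFFF << (32 - cidr)) & 0xFFFFFFFF
    network = ip_int & mask                      # Bitwise AND gives the Network Address
    broadcast = network | (~mask & 0xFFFFFFFF)   # Bitwise OR with the inverted mask gives the Broadcast Address
    return (socket.inet_ntoa(struct.pack("!I", network)),
            socket.inet_ntoa(struct.pack("!I", broadcast)))

print(network_and_broadcast("192.168.1.15", 24))  # ('192.168.1.0', '192.168.1.255')
```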
Now that you have your feet wet, let's try a few more. Identify the Network and Broadcast Addresses for each of the following examples:

1. 10.10.1.97/23
2. 192.168.0.3/25
3. 172.16.5.34/26
4. 192.168.11.17/28

Example one: Convert 10.10.1.97/23 to binary.

IP Address: 00001010.00001010.00000001.01100001
CIDR Address: 11111111.11111111.11111110.00000000

Use Bitwise AND Operator (IP & CIDR):

IP Address: 00001010.00001010.00000001.01100001
CIDR Address: 11111111.11111111.11111110.00000000
Network Address: 00001010.00001010.00000000.00000000
Network Address: 10.10.0.0

Binary inversion of CIDR:

Binary CIDR: 11111111.11111111.11111110.00000000
Inverted Binary CIDR: 00000000.00000000.00000001.11111111

Use Bitwise OR Operator to get the Broadcast Address:

Binary Network: 00001010.00001010.00000000.00000000
Inverted Binary CIDR: 00000000.00000000.00000001.11111111
Binary Broadcast: 00001010.00001010.00000001.11111111
Broadcast Address: 10.10.1.255

IP Address 10.10.1.97/23 belongs to the 10.10.0.0/23 network. The Network Address is 10.10.0.0 and the Broadcast Address is 10.10.1.255.

Example two: Convert 192.168.0.3/25 to binary.

IP Address: 11000000.10101000.00000000.00000011
CIDR Address: 11111111.11111111.11111111.10000000

Use Bitwise AND Operator (IP & CIDR):

IP: 11000000.10101000.00000000.00000011
CIDR: 11111111.11111111.11111111.10000000
Network: 11000000.10101000.00000000.00000000
Network Address: 192.168.0.0

Binary inversion of CIDR:

Binary CIDR: 11111111.11111111.11111111.10000000
Inverted Binary CIDR: 00000000.00000000.00000000.01111111

Use Bitwise OR Operator to get the Broadcast Address:

Binary Network: 11000000.10101000.00000000.00000000
Inverted Binary CIDR: 00000000.00000000.00000000.01111111
Binary Broadcast: 11000000.10101000.00000000.01111111
Broadcast Address: 192.168.0.127

IP Address 192.168.0.3/25 belongs to the 192.168.0.0/25 network. The Network Address is 192.168.0.0 and the Broadcast Address is 192.168.0.127.

Example three: Convert 172.16.5.34/26 to binary.

IP Address: 10101100.00010000.00000101.00100010
CIDR Address: 11111111.11111111.11111111.11000000

Use Bitwise AND Operator (IP & CIDR):

IP: 10101100.00010000.00000101.00100010
CIDR: 11111111.11111111.11111111.11000000
Network: 10101100.00010000.00000101.00000000
Network Address: 172.16.5.0

Binary inversion of CIDR:

Binary CIDR: 11111111.11111111.11111111.11000000
Inverted Binary CIDR: 00000000.00000000.00000000.00111111

Use Bitwise OR Operator to get the Broadcast Address:

Binary Network: 10101100.00010000.00000101.00000000
Inverted Binary CIDR: 00000000.00000000.00000000.00111111
Binary Broadcast: 10101100.00010000.00000101.00111111
Broadcast Address: 172.16.5.63

IP Address 172.16.5.34/26 belongs to the 172.16.5.0/26 network. The Network Address is 172.16.5.0 and the Broadcast Address is 172.16.5.63.

Example four: Convert 192.168.11.17/28 to binary.
IP Address: 11000000.10101000.00001011.00010001
CIDR Address: 11111111.11111111.11111111.11110000

Use Bitwise AND Operator (IP & CIDR):

IP: 11000000.10101000.00001011.00010001
CIDR: 11111111.11111111.11111111.11110000
Network: 11000000.10101000.00001011.00010000
Network Address: 192.168.11.16

Binary inversion of CIDR:

Binary CIDR: 11111111.11111111.11111111.11110000
Inverted Binary CIDR: 00000000.00000000.00000000.00001111

Use Bitwise OR Operator to get the Broadcast Address:

Binary Network: 11000000.10101000.00001011.00010000
Inverted Binary CIDR: 00000000.00000000.00000000.00001111
Binary Broadcast: 11000000.10101000.00001011.00011111
Broadcast Address: 192.168.11.31

IP Address 192.168.11.17/28 belongs to the 192.168.11.16/28 network. The Network Address is 192.168.11.16 and the Broadcast Address is 192.168.11.31.
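If you would rather have a machine check this arithmetic than do the bit-twiddling by hand, most languages ship a helper for it. As a small illustration (using Python's standard ipaddress module; the variable names are mine), the four practice examples above can be verified in a few lines:

import ipaddress

examples = ["10.10.1.97/23", "192.168.0.3/25", "172.16.5.34/26", "192.168.11.17/28"]

for example in examples:
    # strict=False lets us pass a host address rather than the network address itself
    net = ipaddress.ip_network(example, strict=False)
    print(example, "-> network", net.network_address, "broadcast", net.broadcast_address)

Running it prints 10.10.0.0 / 10.10.1.255, 192.168.0.0 / 192.168.0.127, 172.16.5.0 / 172.16.5.63 and 192.168.11.16 / 192.168.11.31, matching the hand-worked results above.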
Asymmetric Travelling Salesman Problem - Particular case

Hello,

First of all I would like to inform you that I am an IT professional and not a Maths expert, so some of my vocabulary may be "mathematically incorrect". I need to solve a particular case of the Asymmetric Travelling Salesman Problem (ATSP from now on), and I definitely need the help of a Maths expert on this one. I will start with an example right ahead so the problem becomes immediately clear to everyone.

Let's suppose the following graph, where the first letter is a node in the graph, the second letter is another node, and the number is the cost of travelling between these two nodes:

A B 30
A C 30
B C 1
B D 1
C B 200
D A 1

As I stated before, the graph is asymmetric, i.e. having two different nodes X and Y, the cost from X to Y may be different from the cost of Y to X. In fact this can be observed in the example nodes B and C (B to C costs 1 and C to B costs 200). Another interesting fact is that not all the nodes have edges that connect each other, so my first approach was to build the missing edges by finding the shortest path between the nodes whose edges are missing. The graph then became:

A B 30
A C 30
A D 31
B A 2
B C 1
B D 1
C A 202
C B 200
C D 201
D A 1
D B 31
D C 31

Now I can present the first requirement of the TSP I'm trying to solve. The general TSP states that we must visit each node only once and we must finish in the node we started. In my case, I must visit a node at least once, and not necessarily just once. I think this is a basic assumption of the Asymmetric TSP (or not, but I am not an expert on the subject). Continuing with the example, I applied the Hungarian algorithm (or method) to find all the possible routes that visit a node at least once, and got the following routes:

A -> C -> B -> D -> A : cost 30 + 200 + 1 + 1 = 232
B -> D -> A -> C -> B : cost 1 + 1 + 30 + 200 = 232
C -> B -> D -> A -> C : cost 200 + 1 + 1 + 30 = 232
D -> A -> C -> B -> D : cost 1 + 30 + 200 + 1 = 232

Another requirement of the TSP I need to solve is that the salesman doesn't need to finish in the same node where he started. I concluded that if I take the path formed by the later node visits of each tour, I get the shortest paths for the TSP where the rule "not finishing where he started" is true. Applying this rule to the example is equivalent to removing the first step, so the paths now become:

C -> B -> D -> A : cost 200 + 1 + 1 = 202
D -> A -> C -> B : cost 1 + 30 + 200 = 231
B -> D -> A -> C : cost 1 + 1 + 30 = 32
A -> C -> B -> D : cost 30 + 200 + 1 = 231

So now we can conclude that the shortest path of this ATSP with the requirements I need is B -> D -> A -> C with the cost of 32.

Now there is a final requirement, and it's here I need the help of a Maths expert. The cost of an edge is only counted once, i.e. if we travel from a node X to Y, the cost of that move is only paid the first time. The second time we travel from X to Y again, the cost will be zero. So, let's go again to the example, and focus only on the path where we started at A:

A -> C -> B -> D : cost 30 + 200 + 1 = 231

The path I need to get from starting at A should NOT be the one above, but the next one:

A -> B -> D -> A -> B -> C : cost 30 + 1 + 1 + 0 + 1 = 33

Please note the step with cost 0. That is the second time we travel from A to B (the first time we did it, it counted 30). How can this rule be applied in the Hungarian method?
Or any other branch and bound algorithm. I can't find a way to insert this cost change/morph in the algorithm logic.

It occurred to me to insert a new vertex in the graph as soon as I know that I travelled from A to B, i.e. I would insert a new node - let's say AA - whose path is A -> AA -> B, where the costs from A to AA and from AA to B are both zero. This way the algorithm would prefer the path A -> AA -> B with cost zero when travelling from A to B. Even with this approach I can't find a way to insert this new vertex in the middle of the Hungarian algorithm execution.

I have also searched on the Internet to see if someone already had the same problem, but I was not lucky enough to find something similar.

Any help on this one is really appreciated!

Regards and thank you
GM
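The Hungarian-method question itself is open, but the cost rule is easy to state in code, which at least gives a way to check candidate answers on small graphs. The sketch below is a plain brute-force search (Python; all names are mine, and it is not the Hungarian method or branch and bound): it charges each directed edge only the first time it is used, requires every node to be visited at least once, and does not require returning to the start. On the example graph it reproduces the cost-33 walk from A described above.

# Directed edge costs from the example, with the missing edges filled in
# using shortest-path costs, exactly as in the completed graph above.
COSTS = {
    ('A', 'B'): 30, ('A', 'C'): 30, ('A', 'D'): 31,
    ('B', 'A'): 2,  ('B', 'C'): 1,  ('B', 'D'): 1,
    ('C', 'A'): 202, ('C', 'B'): 200, ('C', 'D'): 201,
    ('D', 'A'): 1,  ('D', 'B'): 31, ('D', 'C'): 31,
}
NODES = {'A', 'B', 'C', 'D'}

def best_walk(start, max_steps=8):
    """Cheapest walk from `start` visiting every node at least once.
    Each directed edge is paid for only the first time it is traversed,
    and the walk does not have to return to the start node."""
    best = [float('inf'), None]

    def search(node, visited, used, cost, path):
        if cost >= best[0] or len(path) > max_steps:
            return                                  # prune hopeless branches
        if visited == NODES:
            best[0], best[1] = cost, path
            return
        for nxt in NODES - {node}:
            edge = (node, nxt)
            step = 0 if edge in used else COSTS[edge]
            search(nxt, visited | {nxt}, used | {edge}, cost + step, path + [nxt])

    search(start, {start}, set(), 0, [start])
    return best

print(best_walk('A'))   # -> [33, ['A', 'B', 'D', 'A', 'B', 'C']]

Obviously this explodes on larger graphs; it is only meant to make the "pay an edge once" rule precise, not to replace a proper algorithm.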
How to pass multiple props to a component in Vue

In this article, we will learn how to pass multiple props from a parent component to a child component in Vue.js. In Vue, we can pass data from one component to another in the form of props. For example, if we want to pass a name and a phone number as props from a parent component to a child component, we pass them like this.

Parent Component:

<template>
  <student name="John" :phoneno="55593564" />
</template>

Child Component:

<template>
  <div class="student">
    Name: {{ name }} <br />
    Phone: {{ phoneno }}
  </div>
</template>
<script>
export default {
  name: "Student",
  props: {
    name: {
      type: String,
    },
    phoneno: {
      type: Number,
    },
  },
};
</script>

This way of passing props to a component works well if we have one or two props. However, if we have to pass multiple props (more than two) together to a component, then the above way becomes tedious. So, if we have an object and we need to pass all its properties to the child component, we can use v-bind without an argument (using v-bind instead of :prop-name). Let's see with an example.

Passing multiple props as an object in a Vue component.

In Parent Component:

<Student v-bind="studentObj" />
<script>
export default {
  data() {
    return {
      studentObj: {
        name: 'john',
        id: 123,
        subject: 'Maths',
        year: 1997
      }
    }
  }
}
</script>

Here, we have passed the student object (studentObj) as props to the child component using the v-bind directive.

In Child Component: (Student.vue)

<template>
  <div class="student">
    Student Id: {{ id }} <br />
    Name: {{ name }} <br />
    Year: {{ year }} <br />
    Subject: {{ subject }}
  </div>
</template>
<script>
export default {
  name: "Student",
  props: {
    id: Number,
    name: String,
    year: Number,
    subject: String,
  },
};
</script>

Once they are received, we can use them in the child component template using {{ }}. In the child component, we have used prop validations (String and Number), which validate the data type of the props. If a data type does not match, an error is shown in the browser's console.

Related Topics:
How to pass data from parent component to child using props in Vue
Pass multiple objects as props in Vue
Real device testing: What is it and when is it useful? 20 April 2023 katharina Leave a comment Hardware, QA, Test Methodology Real device testing is a critical aspect of the mobile app development process, as it allows developers and QAs to test the app’s performance, functionality, and usability on actual, physical devices. This is essential because it helps identify issues that may not be apparent during testing on emulators or simulators. In this article we look at the what and why of testing on real devices. More specifically, we’ll look at: What is real device testing? In general, you can test your mobile apps in two different ways: 1. Testing on real, physical devices Real device testing, also called local device testing , describes the testing of mobile apps on physical devices. This involves running the app on various devices with different operating systems, screen sizes, resolutions, and hardware configurations. This helps ensure that the app works well on different devices and provides a consistent user experience. To perform mobile testing on real devices, testers need to have access to a variety of devices with different configurations. They can use physical devices or cloud-based services that offer virtual access to a wide range of devices. During testing, testers need to perform a variety of tests, including functional testing, performance testing, usability testing, and compatibility testing. They also need to test the app’s security features to ensure that it is secure and does not pose a risk to users. 2. Testing through emulators or simulators The opposite of local device testing is testing on emulators or simulators. Emulators and simulators are software programs that replicate the behavior of real devices, allowing developers to test their apps without the need for physical devices. Emulators and simulators are typically faster and more convenient than testing on real devices, as they do not require physical access to the device. However, they may not accurately replicate the behavior of a real device and may miss certain issues that could be caught during testing on real devices. While testing on emulators or simulators can be useful in certain situations, it is generally recommended to also perform testing on real devices. This ensures the app works properly on a wide range of devices and provides a consistent user experience. What is the advantage of real device testing? Real device testing offers several advantages over testing on emulators or simulators. Here are some of the main reasons why to chose a real device for testing: Accurate testing environment Testing on local devices provides a more accurate testing environment as it allows testers to evaluate the app’s performance on real-world devices with varying hardware specifications and network conditions. This can help identify issues that may not be detected when testing on emulators or simulators. Better user experience Testing on real devices ensures that the app is tested in a real-world context and provides a better user experience. Testing on emulators or simulators may not fully replicate the user experience, which can lead to issues being missed. More comprehensive testing Testing on real devices allows testers to perform more comprehensive testing, including testing the app’s functionality, performance, usability, and security. Testing on emulators or simulators may not detect all issues and can lead to false positives or false negatives. 
Improved reliability

Real device testing can help improve the reliability of the app by identifying and fixing issues that may cause the app to crash or malfunction. This can help ensure that the app functions correctly and provides a positive user experience.

Devices with functions that are not available on emulators

There are local devices that have very specific functionalities that are not available on emulators, for example satellite antennas. Also, there are cases where the device does use Android as an OS, but it's not a mobile phone, for example navigation systems, conference webcams, or car radios.

When to test mobile apps on real devices?

Mobile testing on physical devices should be done at various stages throughout the mobile app development process. Here are some instances when mobile testing on real devices is particularly important:

1. During initial development: Start with testing as early as possible in the development process. This can help identify issues and bugs early on, which can save time and resources down the line.
2. Right before the release: Test on real devices thoroughly before releasing the app to the public. This can help ensure that the app works well on different devices and provides a positive user experience.
3. After updates or changes: Anytime there is a major update or change to the app, do testing to ensure that the update or change does not cause any issues or bugs.
4. When targeting specific devices or platforms: If the app is being developed specifically for certain devices or platforms, testing on those devices is particularly important. This ensures that the app works well and provides a good user experience.
5. When testing new features: When introducing new features, it is important to test them on real devices to ensure that they work properly and do not cause any issues.

Thus, mobile testing on local devices is important throughout the development process, particularly during initial development, before release, after updates or changes, when targeting specific devices or platforms, and when testing new features.

What are tools for mobile testing on real devices?

There are a few tools available that help you with mobile testing on real devices. Here are two popular examples:

Repeato
With Repeato, testers can perform automated and manual testing on a wide range of devices and operating systems, including iOS and Android. The platform offers features like real-time device streaming, test automation, and detailed reporting.

Appium
Appium is an open-source tool that allows testers to automate testing on real devices, simulators, and emulators. It supports both iOS and Android platforms and can be used with different programming languages like Java, Python, Ruby, and more.

These are just a few examples of the tools available for mobile testing on real devices. We have gathered more in our articles about Android testing tools and iOS testing tools. Keep in mind that in many cases you can test your devices via cable, but you can also test them via Wi-Fi. Again, each of the tools and methods comes with its pros and cons. The choice of tool depends on factors like project requirements, budget, and the testing team's skill set. (A tiny sketch of the device-discovery step these tools build on appears at the end of this article.)

In a nutshell

Overall, mobile testing on real devices is an important part of the mobile app development process, as it helps ensure that the app is of high quality and provides a positive user experience across different devices and platforms.
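As a small, purely illustrative footnote to the tools section above (assuming the Android platform tools are installed so that the adb command is on the PATH), this is roughly the kind of device-discovery step a test script performs before running anything on real hardware:

import subprocess

def attached_android_devices():
    """Return the serial numbers that `adb devices` reports as ready."""
    output = subprocess.run(["adb", "devices"], capture_output=True, text=True, check=True).stdout
    devices = []
    for line in output.splitlines()[1:]:        # the first line is just a header
        parts = line.split()
        if len(parts) == 2 and parts[1] == "device":
            devices.append(parts[0])
    return devices

print("Connected devices:", attached_android_devices() or "none")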
Version:  2.0.40 2.2.26 2.4.37 3.10 3.11 3.12 3.13 3.14 3.15 3.16 3.17 3.18 3.19 4.0 4.1 4.2 4.3 4.4 4.5 4.6 4.7 Linux/drivers/media/pci/mantis/mantis_cards.c 1 /* 2 Mantis PCI bridge driver 3 4 Copyright (C) Manu Abraham ([email protected]) 5 6 This program is free software; you can redistribute it and/or modify 7 it under the terms of the GNU General Public License as published by 8 the Free Software Foundation; either version 2 of the License, or 9 (at your option) any later version. 10 11 This program is distributed in the hope that it will be useful, 12 but WITHOUT ANY WARRANTY; without even the implied warranty of 13 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 GNU General Public License for more details. 15 16 You should have received a copy of the GNU General Public License 17 along with this program; if not, write to the Free Software 18 Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 19 */ 20 21 #include <linux/module.h> 22 #include <linux/moduleparam.h> 23 #include <linux/kernel.h> 24 #include <linux/pci.h> 25 #include <linux/slab.h> 26 #include <asm/irq.h> 27 #include <linux/interrupt.h> 28 #include <media/rc-map.h> 29 30 #include "dmxdev.h" 31 #include "dvbdev.h" 32 #include "dvb_demux.h" 33 #include "dvb_frontend.h" 34 #include "dvb_net.h" 35 36 #include "mantis_common.h" 37 38 #include "mantis_vp1033.h" 39 #include "mantis_vp1034.h" 40 #include "mantis_vp1041.h" 41 #include "mantis_vp2033.h" 42 #include "mantis_vp2040.h" 43 #include "mantis_vp3030.h" 44 45 #include "mantis_dma.h" 46 #include "mantis_ca.h" 47 #include "mantis_dvb.h" 48 #include "mantis_uart.h" 49 #include "mantis_ioc.h" 50 #include "mantis_pci.h" 51 #include "mantis_i2c.h" 52 #include "mantis_reg.h" 53 #include "mantis_input.h" 54 55 static unsigned int verbose; 56 module_param(verbose, int, 0644); 57 MODULE_PARM_DESC(verbose, "verbose startup messages, default is 0 (no)"); 58 59 static int devs; 60 61 #define DRIVER_NAME "Mantis" 62 63 static char *label[10] = { 64 "DMA", 65 "IRQ-0", 66 "IRQ-1", 67 "OCERR", 68 "PABRT", 69 "RIPRR", 70 "PPERR", 71 "FTRGT", 72 "RISCI", 73 "RACK" 74 }; 75 76 static irqreturn_t mantis_irq_handler(int irq, void *dev_id) 77 { 78 u32 stat = 0, mask = 0; 79 u32 rst_stat = 0, rst_mask = 0; 80 81 struct mantis_pci *mantis; 82 struct mantis_ca *ca; 83 84 mantis = (struct mantis_pci *) dev_id; 85 if (unlikely(mantis == NULL)) { 86 dprintk(MANTIS_ERROR, 1, "Mantis == NULL"); 87 return IRQ_NONE; 88 } 89 ca = mantis->mantis_ca; 90 91 stat = mmread(MANTIS_INT_STAT); 92 mask = mmread(MANTIS_INT_MASK); 93 if (!(stat & mask)) 94 return IRQ_NONE; 95 96 rst_mask = MANTIS_GPIF_WRACK | 97 MANTIS_GPIF_OTHERR | 98 MANTIS_SBUF_WSTO | 99 MANTIS_GPIF_EXTIRQ; 100 101 rst_stat = mmread(MANTIS_GPIF_STATUS); 102 rst_stat &= rst_mask; 103 mmwrite(rst_stat, MANTIS_GPIF_STATUS); 104 105 mantis->mantis_int_stat = stat; 106 mantis->mantis_int_mask = mask; 107 dprintk(MANTIS_DEBUG, 0, "\n-- Stat=<%02x> Mask=<%02x> --", stat, mask); 108 if (stat & MANTIS_INT_RISCEN) { 109 dprintk(MANTIS_DEBUG, 0, "<%s>", label[0]); 110 } 111 if (stat & MANTIS_INT_IRQ0) { 112 dprintk(MANTIS_DEBUG, 0, "<%s>", label[1]); 113 mantis->gpif_status = rst_stat; 114 wake_up(&ca->hif_write_wq); 115 schedule_work(&ca->hif_evm_work); 116 } 117 if (stat & MANTIS_INT_IRQ1) { 118 dprintk(MANTIS_DEBUG, 0, "<%s>", label[2]); 119 spin_lock(&mantis->intmask_lock); 120 mmwrite(mmread(MANTIS_INT_MASK) & ~MANTIS_INT_IRQ1, 121 MANTIS_INT_MASK); 122 spin_unlock(&mantis->intmask_lock); 123 schedule_work(&mantis->uart_work); 
124 } 125 if (stat & MANTIS_INT_OCERR) { 126 dprintk(MANTIS_DEBUG, 0, "<%s>", label[3]); 127 } 128 if (stat & MANTIS_INT_PABORT) { 129 dprintk(MANTIS_DEBUG, 0, "<%s>", label[4]); 130 } 131 if (stat & MANTIS_INT_RIPERR) { 132 dprintk(MANTIS_DEBUG, 0, "<%s>", label[5]); 133 } 134 if (stat & MANTIS_INT_PPERR) { 135 dprintk(MANTIS_DEBUG, 0, "<%s>", label[6]); 136 } 137 if (stat & MANTIS_INT_FTRGT) { 138 dprintk(MANTIS_DEBUG, 0, "<%s>", label[7]); 139 } 140 if (stat & MANTIS_INT_RISCI) { 141 dprintk(MANTIS_DEBUG, 0, "<%s>", label[8]); 142 mantis->busy_block = (stat & MANTIS_INT_RISCSTAT) >> 28; 143 tasklet_schedule(&mantis->tasklet); 144 } 145 if (stat & MANTIS_INT_I2CDONE) { 146 dprintk(MANTIS_DEBUG, 0, "<%s>", label[9]); 147 wake_up(&mantis->i2c_wq); 148 } 149 mmwrite(stat, MANTIS_INT_STAT); 150 stat &= ~(MANTIS_INT_RISCEN | MANTIS_INT_I2CDONE | 151 MANTIS_INT_I2CRACK | MANTIS_INT_PCMCIA7 | 152 MANTIS_INT_PCMCIA6 | MANTIS_INT_PCMCIA5 | 153 MANTIS_INT_PCMCIA4 | MANTIS_INT_PCMCIA3 | 154 MANTIS_INT_PCMCIA2 | MANTIS_INT_PCMCIA1 | 155 MANTIS_INT_PCMCIA0 | MANTIS_INT_IRQ1 | 156 MANTIS_INT_IRQ0 | MANTIS_INT_OCERR | 157 MANTIS_INT_PABORT | MANTIS_INT_RIPERR | 158 MANTIS_INT_PPERR | MANTIS_INT_FTRGT | 159 MANTIS_INT_RISCI); 160 161 if (stat) 162 dprintk(MANTIS_DEBUG, 0, "<Unknown> Stat=<%02x> Mask=<%02x>", stat, mask); 163 164 dprintk(MANTIS_DEBUG, 0, "\n"); 165 return IRQ_HANDLED; 166 } 167 168 static int mantis_pci_probe(struct pci_dev *pdev, 169 const struct pci_device_id *pci_id) 170 { 171 struct mantis_pci_drvdata *drvdata; 172 struct mantis_pci *mantis; 173 struct mantis_hwconfig *config; 174 int err = 0; 175 176 mantis = kzalloc(sizeof(struct mantis_pci), GFP_KERNEL); 177 if (mantis == NULL) { 178 printk(KERN_ERR "%s ERROR: Out of memory\n", __func__); 179 return -ENOMEM; 180 } 181 182 drvdata = (void *)pci_id->driver_data; 183 mantis->num = devs; 184 mantis->verbose = verbose; 185 mantis->pdev = pdev; 186 config = drvdata->hwconfig; 187 config->irq_handler = &mantis_irq_handler; 188 mantis->hwconfig = config; 189 mantis->rc_map_name = drvdata->rc_map_name; 190 191 spin_lock_init(&mantis->intmask_lock); 192 193 err = mantis_pci_init(mantis); 194 if (err) { 195 dprintk(MANTIS_ERROR, 1, "ERROR: Mantis PCI initialization failed <%d>", err); 196 goto err_free_mantis; 197 } 198 199 err = mantis_stream_control(mantis, STREAM_TO_HIF); 200 if (err < 0) { 201 dprintk(MANTIS_ERROR, 1, "ERROR: Mantis stream control failed <%d>", err); 202 goto err_pci_exit; 203 } 204 205 err = mantis_i2c_init(mantis); 206 if (err < 0) { 207 dprintk(MANTIS_ERROR, 1, "ERROR: Mantis I2C initialization failed <%d>", err); 208 goto err_pci_exit; 209 } 210 211 err = mantis_get_mac(mantis); 212 if (err < 0) { 213 dprintk(MANTIS_ERROR, 1, "ERROR: Mantis MAC address read failed <%d>", err); 214 goto err_i2c_exit; 215 } 216 217 err = mantis_dma_init(mantis); 218 if (err < 0) { 219 dprintk(MANTIS_ERROR, 1, "ERROR: Mantis DMA initialization failed <%d>", err); 220 goto err_i2c_exit; 221 } 222 223 err = mantis_dvb_init(mantis); 224 if (err < 0) { 225 dprintk(MANTIS_ERROR, 1, "ERROR: Mantis DVB initialization failed <%d>", err); 226 goto err_dma_exit; 227 } 228 229 err = mantis_input_init(mantis); 230 if (err < 0) { 231 dprintk(MANTIS_ERROR, 1, 232 "ERROR: Mantis DVB initialization failed <%d>", err); 233 goto err_dvb_exit; 234 } 235 236 err = mantis_uart_init(mantis); 237 if (err < 0) { 238 dprintk(MANTIS_ERROR, 1, "ERROR: Mantis UART initialization failed <%d>", err); 239 goto err_input_exit; 240 } 241 242 devs++; 243 244 return 0; 
245 246 err_input_exit: 247 mantis_input_exit(mantis); 248 249 err_dvb_exit: 250 mantis_dvb_exit(mantis); 251 252 err_dma_exit: 253 mantis_dma_exit(mantis); 254 255 err_i2c_exit: 256 mantis_i2c_exit(mantis); 257 258 err_pci_exit: 259 mantis_pci_exit(mantis); 260 261 err_free_mantis: 262 kfree(mantis); 263 264 return err; 265 } 266 267 static void mantis_pci_remove(struct pci_dev *pdev) 268 { 269 struct mantis_pci *mantis = pci_get_drvdata(pdev); 270 271 if (mantis) { 272 273 mantis_uart_exit(mantis); 274 mantis_input_exit(mantis); 275 mantis_dvb_exit(mantis); 276 mantis_dma_exit(mantis); 277 mantis_i2c_exit(mantis); 278 mantis_pci_exit(mantis); 279 kfree(mantis); 280 } 281 return; 282 } 283 284 static struct pci_device_id mantis_pci_table[] = { 285 MAKE_ENTRY(TECHNISAT, CABLESTAR_HD2, &vp2040_config, 286 RC_MAP_TECHNISAT_TS35), 287 MAKE_ENTRY(TECHNISAT, SKYSTAR_HD2_10, &vp1041_config, 288 NULL), 289 MAKE_ENTRY(TECHNISAT, SKYSTAR_HD2_20, &vp1041_config, 290 NULL), 291 MAKE_ENTRY(TERRATEC, CINERGY_C, &vp2040_config, 292 RC_MAP_TERRATEC_CINERGY_C_PCI), 293 MAKE_ENTRY(TERRATEC, CINERGY_S2_PCI_HD, &vp1041_config, 294 RC_MAP_TERRATEC_CINERGY_S2_HD), 295 MAKE_ENTRY(TWINHAN_TECHNOLOGIES, MANTIS_VP_1033_DVB_S, &vp1033_config, 296 NULL), 297 MAKE_ENTRY(TWINHAN_TECHNOLOGIES, MANTIS_VP_1034_DVB_S, &vp1034_config, 298 NULL), 299 MAKE_ENTRY(TWINHAN_TECHNOLOGIES, MANTIS_VP_1041_DVB_S2, &vp1041_config, 300 RC_MAP_TWINHAN_DTV_CAB_CI), 301 MAKE_ENTRY(TWINHAN_TECHNOLOGIES, MANTIS_VP_2033_DVB_C, &vp2033_config, 302 RC_MAP_TWINHAN_DTV_CAB_CI), 303 MAKE_ENTRY(TWINHAN_TECHNOLOGIES, MANTIS_VP_2040_DVB_C, &vp2040_config, 304 NULL), 305 MAKE_ENTRY(TWINHAN_TECHNOLOGIES, MANTIS_VP_3030_DVB_T, &vp3030_config, 306 NULL), 307 { } 308 }; 309 310 MODULE_DEVICE_TABLE(pci, mantis_pci_table); 311 312 static struct pci_driver mantis_pci_driver = { 313 .name = DRIVER_NAME, 314 .id_table = mantis_pci_table, 315 .probe = mantis_pci_probe, 316 .remove = mantis_pci_remove, 317 }; 318 319 module_pci_driver(mantis_pci_driver); 320 321 MODULE_DESCRIPTION("MANTIS driver"); 322 MODULE_AUTHOR("Manu Abraham"); 323 MODULE_LICENSE("GPL"); 324 This page was automatically generated by LXR 0.3.1 (source).  •  Linux is a registered trademark of Linus Torvalds  •  Contact us
Location: PHPKode > scripts > wpStoreCart > wpstorecart/php/screen-meta-links.php <?php /** * @author Janis Elsts * @copyright 2010 */ if ( !class_exists('wsScreenMetaLinks10') ): //Load JSON functions for PHP < 5.2 if (!class_exists('Services_JSON')){ //require ABSPATH . WPINC . '/class-json.php'; } class wsScreenMetaLinks10 { var $registered_links; //List of meta links registered for each page. /** * Class constructor. * * @return void */ function wsScreenMetaLinks10(){ $this->registered_links = array(); add_action('admin_notices', array(&$this, 'append_meta_links')); add_action('admin_print_styles', array(&$this, 'add_link_styles')); } /** * Add a new link to the screen meta area. * * Do not call this method directly. Instead, use the global add_screen_meta_link() function. * * @param string $id Link ID. Should be unique and a valid value for a HTML ID attribute. * @param string $text Link text. * @param string $href Link URL. * @param string|array $page The page(s) where you want to add the link. * @param array $attributes Optional. Additional attributes for the link tag. * @return void */ function add_screen_meta_link($id, $text, $href, $page, $attributes = null){ if ( !is_array($page) ){ $page = array($page); } if ( is_null($attributes) ){ $attributes = array(); } //Basically a list of props for a jQuery() call $link = compact('id', 'text', 'href'); $link = array_merge($link, $attributes); //Add the CSS classes that will make the look like a proper meta link if ( empty($link['class']) ){ $link['class'] = ''; } $link['class'] = 'show-settings custom-screen-meta-link ' . $link['class']; //Save the link in each relevant page's list foreach($page as $page_id){ if ( !isset($this->registered_links[$page_id]) ){ $this->registered_links[$page_id] = array(); } $this->registered_links[$page_id][] = $link; } } /** * Output the JS that appends the custom meta links to the page. * Callback for the 'admin_notices' action. * * @access private * @return void */ function append_meta_links(){ global $hook_suffix; //Find links registered for this page $links = $this->get_links_for_page($hook_suffix); if ( empty($links) ){ return; } global $wp_db_version; if ( $wp_db_version < 18715 ) { // If this is less than Wordpress 3.3 Beta 1 $screenmeta = '#screen-meta-links'; } if ( $wp_db_version >= 18715 ) { // If this is equal to or greater than Wordpress 3.3 Beta 1 echo '<div id="screen-meta-links-wpsc"></div>'; $screenmeta = '#screen-meta-links-wpsc'; } ?> <script type="text/javascript"> (function($, links){ var container = $("<?PHP echo $screenmeta;?>"); for(var i = 0; i < links.length; i++){ container.append( $('<div/>') .attr({ 'id' : links[i].id + '-wrap', 'class' : 'hide-if-no-js screen-meta-toggle custom-screen-meta-link-wrap' }) .append( $('<a/>', links[i]) ) ); } })(jQuery, <?php echo $this->json_encode($links); ?>); </script> <?php } /** * Get a list of custom screen meta links registered for a specific page. * * @param string $page * @return array */ function get_links_for_page($page){ $links = array(); if ( isset($this->registered_links[$page]) ){ $links = array_merge($links, $this->registered_links[$page]); } $page_as_screen = $this->page_to_screen_id($page); if ( ($page_as_screen != $page) && isset($this->registered_links[$page_as_screen]) ){ $links = array_merge($links, $this->registered_links[$page_as_screen]); } return $links; } /** * Output the CSS code for custom screen meta links. Required because WP only * has styles for specific meta links (by #id), not meta links in general. 
* * Callback for 'admin_print_styles'. * * @access private * @return void */ function add_link_styles(){ global $hook_suffix; //Don't output the CSS if there are no custom meta links for this page. $links = $this->get_links_for_page($hook_suffix); if ( empty($links) ){ return; } ?> <style type="text/css"> .custom-screen-meta-link-wrap { float: right; height: 22px; padding: 0; margin: 0 6px 0 0; font-family: "Lucida Grande", Verdana, Arial, "Bitstream Vera Sans", sans-serif; background: #e3e3e3; border-bottom-left-radius: 3px; border-bottom-right-radius: 3px; -moz-border-radius-bottomleft: 3px; -moz-border-radius-bottomright: 3px; -webkit-border-bottom-left-radius: 3px; -webkit-border-bottom-right-radius: 3px; } #screen-meta .custom-screen-meta-link-wrap a.custom-screen-meta-link { background-image: none; padding-right: 6px; } </style> <?php } /** * Convert a page hook name to a screen ID. * * @uses convert_to_screen() * @access private * * @param string $page * @return string */ function page_to_screen_id($page){ if ( function_exists('convert_to_screen') ){ $screen = convert_to_screen($page); if ( isset($screen->id) ){ return $screen->id; } else { return ''; } } else { return str_replace( array('.php', '-new', '-add' ), '', $page); } } /** * Back-wards compatible json_encode(). Used to encode link data before * passing it to the JavaScript that actually creates the links. * * @param mixed $data * @return string */ function json_encode($data){ if ( function_exists('json_encode') ){ return json_encode($data); } else { $json = new Services_JSON(); return( $json->encodeUnsafe($data) ); } } } global $ws_screen_meta_links_versions; if ( !isset($ws_screen_meta_links_versions) ){ $ws_screen_meta_links_versions = array(); } $ws_screen_meta_links_versions['1.0'] = 'wsScreenMetaLinks10'; endif; /** * Add a new link to the screen meta area. * * @param string $id Link ID. Should be unique and a valid value for a HTML ID attribute. * @param string $text Link text. * @param string $href Link URL. * @param string|array $page The page(s) where you want to add the link. * @param array $attributes Optional. Additional attributes for the link tag. * @return void */ function add_screen_meta_link($id, $text, $href, $page, $attributes = null){ global $ws_screen_meta_links_versions; static $instance = null; if ( is_null($instance) ){ //Instantiate the latest version of the wsScreenMetaLinks class uksort($ws_screen_meta_links_versions, 'version_compare'); $className = end($ws_screen_meta_links_versions); $instance = new $className; } return $instance->add_screen_meta_link($id, $text, $href, $page, $attributes); } ?> Return current item: wpStoreCart
Angular 7: TypeScript Angular is built in TypeScript Angular is built in a JavaScript-like language called TypeScript. You might be skeptical of using a new language just for Angular, but it turns out, there are a lot of great reasons to use TypeScript instead of plain JavaScript. TypeScript isn’t a completely new language, it’s a superset of ES6. If we write ES6 code, it’s perfectly valid and compilable TypeScript code. Here’s a diagram that shows the relationship between the languages: ES5, ES6, and TypeScript What is ES5? What is ES6? ES5 is short for “ECMAScript 5”, otherwise known as “regular JavaScript”. ES5 is the normal JavaScript we all know and love. It runs in more-or-less every browser. ES6 is the next version of JavaScript, which we talk more about below. At the publishing of this book, very few browsers will run ES6 out of the box, much less TypeScript. To solve this issue we have transpilers (or sometimes called transcompiler). The TypeScript transpiler takes our TypeScript code as input and outputs ES5 code that nearly all browsers understand. For converting TypeScript to ES5 there is a single transpiler written by the core TypeScript team. However if we wanted to convert ES6 code (not TypeScript) to ES5 there are two major ES6-to-ES5 transpilers: traceur by Google and babel created by the JavaScript community. We’re not going to be using either directly for this book, but they’re both great projects that are worth knowing about. We installed TypeScript in the last chapter, but in case you’re just starting out in this chapter, you can install it like so: npm install -g typescript TypeScript is an official collaboration between Microsoft and Google. That’s great news because with two tech heavyweights behind it we know that it will be supported for a long time. Both groups are committed to moving the web forward and as developers we win because of it. One of the great things about transpilers is that they allow relatively small teams to make improvements to a language without requiring everyone on the internet upgrade their browser. One thing to point out: we don’t have to use TypeScript with Angular2. If you want to use ES5 (i.e. “regular” JavaScript), you definitely can. There is an ES5 API that provides access to all functionality of Angular2. Then why should we use TypeScript at all? Because there are some great features in TypeScript that make development a lot better. What do we get with TypeScript? There are five big improvements that TypeScript bring over ES5: Let’s deal with these one at a time. Types The major improvement of TypeScript over ES6, that gives the language its name, is the typing system. For some people the lack of type checking is considered one of the benefits of using a language like JavaScript. You might be a little skeptical of type checking but I’d encourage you to give it a chance. One of the great things about type checking is that 1. it helps when writing code because it can prevent bugs at compile time and 2. it helps when reading code because it clarifies your intentions It’s also worth noting that types are optional in TypeScript. If we want to write some quick code or prototype a feature, we can omit types and gradually add them as the code becomes more mature. TypeScript’s basic types are the same ones we’ve been using implicitly when we write “normal” JavaScript code: strings, numbers, booleans, etc. Up until ES5, we would define variables with the var keyword, like var fullName;. 
The new TypeScript syntax is a natural evolution from ES5, we still use var but now we can optionally provide the variable type along with its name: var fullName: string; When declaring functions we can use types for arguments and return values: function greetText(name: string): string { return "Hello " + name; } In the example above we are defining a new function called greetText which takes one argument: name. The syntax name: string says that this function expects name to be a string. Our code won’t compile if we call this function with anything other than a string and that’s a good thing because otherwise we’d introduce a bug. Notice that the greetText function also has a new syntax after the parentheses: : string {. The colon indicates that we will specify the return type for this function, which in this case is a string. This is helpful because 1. if we accidentally return anything other than a string in our code, the compiler will tell us that we made a mistake and 2. any other developers who want to use this function know precisely what type of object they’ll be getting. Let’s see what happens if we try to write code that doesn’t conform to our declared typing: function hello(name: string): string { return 12; } If we try to compile it, we’ll see the following error: $ tsc compile-error.ts compile-error.ts(2,12): error TS2322: Type 'number' is not assignable to type 'string'. What happened here? We tried to return 12 which is a number, but we stated that hello would return a string (by putting the ): string { after the argument declaration). In order to correct this, we need to update the function declaration to return a number: function hello(name: string): number { return 12; } This is one small example, but already we can see that by using types it can save us from a lot of bugs down the road. So now that we know how to use types, how can we know what types are available to use? Let’s look at the list of built-in types, and then we’ll figure out how to create our own. Trying it out with a REPL To play with the examples in this chapter, let’s install a nice little utility called TSUN (TypeScript Upgraded Node): $ npm install -g tsun Now start tsun: $ tsun TSUN : TypeScript Upgraded Node type in TypeScript expression to evaluate type :help for commands in repl > That little > is the prompt indicating that TSUN is ready to take in commands. In most of the examples below, you can copy and paste into this terminal and follow along. Built-in types String A string holds text and is declared using the string type: var fullName: string = 'Nate Murray'; Number A number is any type of numeric value. In TypeScript, all numbers are represented as floating point. The type for numbers is number: var age: number = 36; Boolean The boolean holds either true or false as the value. var married: boolean = true; Array Arrays are declared with the Array type. However, because an Array is a collection, we also need to specify the type of the objects in the Array. We specify the type of the items in the array with either the Array<type> or type[] notations: var jobs: Array<string> = ['IBM', 'Microsoft', 'Google']; var jobs: string[] = ['Apple', 'Dell', 'HP']; Or similarly with a number: var chickens: Array<number> = [1, 2, 3]; var chickens: number[] = [4, 5, 6]; Enums Enums work by naming numeric values. 
For instance, if we wanted to have a fixed list of roles a person may have we could write this: enum Role {Employee, Manager, Admin}; var role: Role = Role.Employee; The default initial value for an enum is 0, though you can set the starting enum number like this: enum Role {Employee = 3, Manager, Admin}; var role: Role = Role.Employee; In the code above, instead of Employee being 0, Employee is 3. The value of the enum increments from there, which means Manager is 4 and Admin is 5, and we can even set individual values: enum Role {Employee = 3, Manager = 5, Admin = 7}; var role: Role = Role.Employee; You can also look up the name of a given enum by using its value: enum Role {Employee, Manager, Admin}; console.log('Roles: ', Role[0], ',', Role[1], 'and', Role[2]); Any any is the default type if we omit typing for a given variable. Having a variable of type any allows it to receive any kind of value: var something: any = 'as string'; something = 1; something = [1, 2, 3]; Void Using void means there’s no type expected. This is usually in functions with no return value: function setName(name: string): void { this.fullName = name; } Classes In JavaScript ES5 object oriented programming was accomplished by using prototype-based objects. This model doesn’t use classes, but instead relies on prototypes. A number of good practices have been adopted by the JavaScript community to compensate the lack of classes. A good summary of those good practices can be found in Mozilla Developer Network’s JavaScript Guide, and you can find a good overview on the Introduction to Object-Oriented JavaScript page. However, in ES6 we finally have built-in classes in JavaScript. To define a class we use the new class keyword and give our class a name and a body: class Vehicle { } Classes may have properties, methods, and constructors. Properties Properties define data attached to an instance of a class. For example, a class named Person might have properties like first_name, last_name and age. Each property in a class can optionally have a type. For example, we could say that the first_name and last_name properties are strings and the age property is a number. The declaration for a Person class that looks like this: class Person { first_name: string; last_name: string; age: number; } Methods Methods are functions that run in context of an object. To call a method on an object, we first have to have an instance of that object. To instantiate a class, we use the new keyword. Use new Person() to create a new instance of the Person class, for example. If we wanted to add a way to greet a Person using the class above, we would write something like: {lang=javascript,line-numbers=off} class Person { first_name: string; last_name: string; age: number; // leanpub-start-insert greet() { console.log("Hello", this.first_name); } // leanpub-end-insert } Notice that we’re able to access the first_name for this Person by using the this keyword and calling this.first_name. When methods don’t declare an explicit returning type and return a value, it’s assumed they can return anything (any type). However, in this case we are returning void, since there’s no explicit return statement. Note that a void value is also a valid any value. In order to invoke the greet method, you would need to first have an instance of the Person class. 
Here's how we do that:

// declare a variable of type Person
var p: Person;

// instantiate a new Person instance
p = new Person();

// give it a first_name
p.first_name = 'Felipe';

// call the greet method
p.greet();

You can declare a variable and instantiate a class on the same line if you want:

var p: Person = new Person();
Burst error BC0102: Unexpected internal compiler error

Accidentally ran into an internal Burst compile error while refactoring some code.

Error

Simple repro code:

public struct TestData
{
    public int3 Min;
    public int Size;
}

[BurstCompile]
private struct Repo : IJob
{
    public NativeReference<TestData> Root;

    public void Execute()
    {
        this.FailMethod((TestData*)this.Root.GetUnsafePtr());
    }

    private void FailMethod(in TestData* node)
    {
        var aabb = new Unity.Mathematics.MinMaxAABB
        {
            Min = node->Min,
            Max = node->Size
        };
    }
}

The issue is just the in TestData* node. Removing the 'in' and Burst compiles fine. This is a pretty irrelevant bug, as the 'in' doesn't do anything on a pointer as far as I'm aware; I was just refactoring a method and accidentally left it on. Only reporting it because, as far as I'm aware, this is technically valid code and the repro was very simple.

Good find! I can repro it, so I'll fix it. Thanks!
How DRPU Conversion Software – Oracle to MySQL Works?

Step 1: Enter the required fields in the right panel to establish a connection with the MySQL server. Similarly, enter the required fields in the left panel to establish a connection with the Oracle database.

Establish connection

Step 2: Choose the table attributes you want to apply during conversion, such as converting views, skipping index conversion, or converting table definition(s) only.

Choose Table attributes

Step 3: If the convert-views option is selected, the following window opens; choose the views to convert from the list.

Choose views from a list

Step 4: The following screenshot shows the progress of the database conversion process. Click the "Stop" button to abort the ongoing conversion and press the "Skip" button to skip any particular table.

Database conversion process
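The tool above does all of this through its GUI, but if you are curious what a table-by-table conversion boils down to, here is a rough, hypothetical sketch in Python. Every detail in it is a placeholder assumption — the connection settings, the table name, the column list — and it relies on the python-oracledb and mysql-connector-python packages rather than anything shipped with the DRPU product:

import oracledb               # python-oracledb package
import mysql.connector        # mysql-connector-python package

# Placeholder connection details - substitute your own servers and credentials.
source = oracledb.connect(user="scott", password="tiger", dsn="oracle-host/ORCLPDB1")
target = mysql.connector.connect(host="mysql-host", user="root", password="secret", database="target_db")

read_cur = source.cursor()
write_cur = target.cursor()

# Copy one (hypothetical) table in batches to keep memory use flat.
read_cur.execute("SELECT id, name, price FROM products")
while True:
    rows = read_cur.fetchmany(1000)
    if not rows:
        break
    write_cur.executemany("INSERT INTO products (id, name, price) VALUES (%s, %s, %s)", rows)
target.commit()

read_cur.close()
write_cur.close()
source.close()
target.close()

A real migration also has to translate the schema and data types (DATE, NUMBER, CLOB and so on), which is exactly the part the conversion software automates.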
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %%% flag.pi %%% by Neng-Fa Zhou, Salvador Abreu, and Ulrich Neumerkel %%% http://cmpe.emu.edu.tr/bayram/courses/531/Prolog%20Competition/ppc2009.pdf %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% main => test. test => flag(5). flag(N) => printf(" _\n (_)\n<___>\n"), NR = 2*N+4, NC = 5*N+5, A = new_array(NR,NC), p(A,1,5,N), foreach(I in 1..NR) A[I,2] = '|', A[I,4] = '|', foreach (J in 1..NC) (var(A[I,J]) -> print(' ') ; print(A[I,J])), (J == NC -> nl ; true ) end end. p(A,I,J,1) => foreach(C in 0..4) A[I,J+C] = '_', A[I+4,J+C] = '~' end, foreach(C in 1..3) A[I+C,J+4] = '|' end. p(A,I,J,N) => foreach(C in 0..4) A[I,J+C] = '_' end, foreach(C in 0..3) A[I+4,J+C] = '~' end, A[I+1,J+5] = ')', A[I+2,J+4] = '(', A[I+3,J+4] = '|', A[I+4,J+4] = '|', A[I+5,J+4] = '|', A[I+6,J+4] = '~', p(A,I+2,J+5,N-1).
Questions about Ruby, dynamic, reflective, general-purpose object-oriented programming language that combines syntax inspired by Perl with Smalltalk-like features. learn more… | top users | synonyms 3 votes 1answer 54 views How do I go about setting up my Sinatra REST API on a server? I'm an iOS developer primarily. In building my current app, I needed a server that would have a REST API with a couple of GET requests. I spent a little time learning Ruby, and landed on using ... -1 votes 0answers 35 views How much poppler-utils is scalable? We need to process large number of PDFs simultaneously to extract text, images and create htmls. We have been using poppler-utils for PDF processing and Ruby for making system calls to poppler-util ... 1 vote 1answer 137 views What's the best practice for adding a lot of attributes to a Rails model? So, I'm building an API wrapper gem that works with Spree's Product model. The API provides extensive customization of the data you send to it. I would like a user of the gem to be able to take ... 34 votes 8answers 7k views Explanation on how “Tell, Don't Ask” is considered good OO This blogpost was posted on Hacker News with several upvotes. Coming from C++, most of these examples seem to go against what I've been taught. Such as example #2: Bad: def ... 3 votes 2answers 93 views Is a guardfile part of the private developers environment or the public OSS project? Let us say I have an open source project on github. Now I wish to include tools required to develop the project so others can easily contribute. It is hard for me to tell when these tools should be ... 9 votes 1answer 210 views Python's join seems to focus not on the items to join, but on the symbol, as compared to Ruby or Smalltalk, for a design reason? I thought one of the cornerstone of OOP is that, we have objects, which are the items we are interested in dealing with, and then we send messages to them. So it may seem natural that, I have a ... 5 votes 1answer 116 views Testing procedural code TL;DR. Writing procedural code within a DB transaction. How can I improve design of the code so it's better testable? In my application I have a service object that perform multiple things within the ... 0 votes 2answers 67 views Using symbols instead of strings in conditions I usually have if/else conditions which involves comparing values with a constant string. Is it really advantageous to use symbols in such cases or use string. For eg. if status == 'submitted' ... ... 5 votes 4answers 168 views Ruby: if variable vs if variable.nil? I'm new to Ruby and I was surprised when I found out that all objects are true apart from nil and false. Even 0 is true. A nice thing about that property of the language is that you can write: if ... 6 votes 2answers 7k views Why do people suggest not to use instance variable for views in Ruby on Rails Why do I hear that it is not good to share instance variables between controllers and views. I kind of like it because I can see immediately via the @ that something is coming from the controller. I ... 1 vote 1answer 854 views ruby-idiomatic hashes vs arrays So I am still fairly new to ruby, though I have noticed that it is very hard to create 2d-array and that hashes seem to be more of the go to data structure than arrays. I was wondering why the Ruby ... 0 votes 1answer 70 views Does extending a ruby class violate the LSP? I am reading about SOLID principles. 
Ruby-related questions from a Q&A site listing (titles only; vote counts, answer counts, view counts and truncated excerpts omitted):

• (untitled excerpt on subclassing: "In Ruby tutorials and code samples, I often see subclass extensions like: class House ... class Room < House ...")
• Ruby best practices for Data Access layers
• How are scripting languages compiled?
• What happened to VM based deployments?
• Add-on hot deployable modules for Akka actors?
• Where did the tradition of new releases on Christmas Day start? [closed]
• Terminology - Difference between thread and process and how they manage DB connections
• Ruby Sinatra best practices for project structure [closed]
• Android App with Ruby Backend Server
• What is the "correct" way to store functions in a database?
• Is the time complexity of a while loop with three pointers different than 3 nested for loops?
• Best way to design a database interface [duplicate]
• Securing a private API used by an iOS App
• Is there any situation when there's no alternative to instanceof?
• Why is Ruby's interpreter so small? [closed]
• How do I design a subclass whose method contradicts its superclass? [duplicate]
• How can I rank teams based off of head-to-head wins/losses
• Return random `list` item by its `weight`
• Would it be possible to create a language similar to Ruby/Python with static typing that had the speed/memory usage of a compiled C program? [closed]
• What can procs and lambdas do that functions can't in Ruby
• PHP and Ruby: how to leverage both? and, is it worth it? [closed]
• Is listing Types in documentation a code smell
• Is Non-Deterministic Resource-Management a Leaky Abstraction?
• Refactoring case-when statement [duplicate]
• How do I distinguish between things belonging to the standard library, specific gems, and those that are user-generated in Ruby?
• What is a closure and how is it implemented in Ruby?
• Online stores service design [duplicate]
• Which is the convention in Rails to perform calculations and display the results?
• Replace repeated timestamp with variable in tests
• How can I move from Java and ColdFusion to Ruby on Rails? [closed]
• How much Ruby should I learn before moving to Rails? [closed]
• Writing a gem supporting compiled languages with Rake. How to test?
• How does the consumer-producer solution work?
• Questions for Architecture with Ruby and Java [closed]
• Ruby Terminology Question: Implicit Declaration or no declaration at all?
• Is it obsolete to study older versions of Ruby and RoR? [closed]
• Why would you want to use an array, or hash as hash key in Ruby?
• In Ruby, change global in thread safe block
• Why don't Python and Ruby make a distinction between declaring and assigning a value to variables?
GDI Screenshot + _WinHttp

ame1011:
Hi, what I would like to accomplish is to take a screenshot and upload it to a remote PHP file via _WinHttp. Previously we were taking the screenshots and saving them to a network folder; we would now like to alter this so that it posts the data through WinHTTP instead. Please see the following code sample (note it does NOT run, it is just for reference).

$hbitmap = _ScreenCapture_Capture('', $iScreenCapDimensions[1], $iScreenCapDimensions[2], $iScreenCapDimensions[3], $iScreenCapDimensions[4])
_SavehBitmapEx($hbitmap, 100000000, _WinAPI_GetSystemMetrics(78), _WinAPI_GetSystemMetrics(79))

Func _SavehBitmapEx($hbitmap, $iID, $iWidth, $iHeight)
    Local $save_result = True
    $bitmap = _GDIPlus_BitmapCreateFromHBITMAP($hbitmap)
    $graphics = _GDIPlus_ImageGetGraphicsContext($bitmap)
    $resizedbitmap = _GDIPlus_BitmapCreateFromGraphics($iWidth, $iHeight, $graphics)
    $graphics2 = _GDIPlus_ImageGetGraphicsContext($resizedbitmap)
    _GDIPLUS_GraphicsSetInterpolationMode($graphics2, $InterpolationModeHighQualityBicubic)
    _GDIPlus_GraphicsDrawImageRect($graphics2, $bitmap, 0, 0, $iWidth, $iHeight)

    ;;; - CODE THAT REQUIRES UPDATE
    Local $locImgFile = "C:\temp\" & _GetImageFolderPathFromId($iID, '')
    $save_result = _GDIPlus_ImageSaveToFile($resizedbitmap, $locImgFile) ;saves to temp file
    PostImage(_GetImageFolderPathFromId($iID, '/'), FileRead($locImgFile)) ;file reads image and uploads to http server
    FileDelete($locImgFile) ;deletes image when done
    ;;; - END CODE THAT REQUIRES UPDATE

    _GDIPlus_GraphicsDispose($graphics)
    _GDIPlus_GraphicsDispose($graphics2)
    _GDIPlus_BitmapDispose($bitmap)
    _GDIPlus_BitmapDispose($resizedbitmap)
    Return $save_result
EndFunc   ;==>_SavehBitmapEx

Func _GetImageFolderPathFromId($id, $sep = '\')
    Local $aLastImageSplit
    $aLastImageSplit = StringSplit(String($id), '')
    $return = $aLastImageSplit[1] & $aLastImageSplit[2] & $aLastImageSplit[3] & $sep & _
            $aLastImageSplit[4] & $aLastImageSplit[5] & $aLastImageSplit[6] & $sep & _
            $aLastImageSplit[7] & $aLastImageSplit[8] & $aLastImageSplit[9] & $sep & _
            $aLastImageSplit[10] & $aLastImageSplit[11] & $aLastImageSplit[12] & '.jpg'
    Return $return
EndFunc   ;==>_GetImageFolderPathFromId

Func _GDIPlus_SaveImage2BinaryString($hBitmap, $iQuality = 100) ;coded by Andreik, modified by UEZ
    Local $sImgCLSID = _GDIPlus_EncodersGetCLSID("jpg")
    Local $tGUID = _WinAPI_GUIDFromString($sImgCLSID)
    Local $pEncoder = DllStructGetPtr($tGUID)
    Local $tParams = _GDIPlus_ParamInit(1)
    Local $tData = DllStructCreate("int Quality")
    DllStructSetData($tData, "Quality", $iQuality) ;quality 0-100
    Local $pData = DllStructGetPtr($tData)
    _GDIPlus_ParamAdd($tParams, $GDIP_EPGQUALITY, 1, $GDIP_EPTLONG, $pData)
    Local $pParams = DllStructGetPtr($tParams)
    Local $hStream = DllCall("ole32.dll", "uint", "CreateStreamOnHGlobal", "ptr", 0, "bool", True, "ptr*", 0) ;http://msdn.microsoft.com/en-us/library/ms864401.aspx
    If @error Then Return SetError(1, 0, 0)
    $hStream = $hStream[3]
    DllCall($ghGDIPDll, "uint", "GdipSaveImageToStream", "ptr", $hBitmap, "ptr", $hStream, "ptr", $pEncoder, "ptr", $pParams)
    _GDIPlus_BitmapDispose($hBitmap)
    Local $hMemory = DllCall("ole32.dll", "uint", "GetHGlobalFromStream", "ptr", $hStream, "ptr*", 0) ;http://msdn.microsoft.com/en-us/library/aa911736.aspx
    If @error Then Return SetError(2, 0, 0)
    $hMemory = $hMemory[2]
    Local $iMemSize = _MemGlobalSize($hMemory)
    Local $pMem = _MemGlobalLock($hMemory)
    $tData = DllStructCreate("byte[" & $iMemSize & "]", $pMem)
    Local $bData = DllStructGetData($tData, 1)
    Local $tVARIANT = DllStructCreate("word vt;word r1;word r2;word r3;ptr data;ptr")
    Local $aCall = DllCall("oleaut32.dll", "long", "DispCallFunc", "ptr", $hStream, "dword", 8 + 8 * @AutoItX64, "dword", 4, "dword", 23, "dword", 0, "ptr", 0, "ptr", 0, "ptr", DllStructGetPtr($tVARIANT)) ;http://msdn.microsoft.com/en-us/library/windows/desktop/ms221473(v=vs.85).aspx
    _MemGlobalFree($hMemory)
    Return $bData
EndFunc   ;==>_GDIPlus_SaveImage2BinaryString

The above method works, but I would like to change it: it saves a temporary JPG, reads the file back in, uploads it to the server and then deletes the temp file. I would like to use the _GDIPlus_SaveImage2BinaryString() function instead, rather than going through a temp file. Changing:

;;; - CODE THAT REQUIRES UPDATE
Local $locImgFile = "C:\temp\" & _GetImageFolderPathFromId($iID, '')
$save_result = _GDIPlus_ImageSaveToFile($resizedbitmap, $locImgFile) ;saves to temp file
PostImage(_GetImageFolderPathFromId($iID, '/'), FileRead($locImgFile)) ;file reads image and uploads to http server
FileDelete($locImgFile) ;deletes image when done
;;; - END CODE THAT REQUIRES UPDATE

to:

;;; - CODE THAT REQUIRES UPDATE
PostImage(_GetImageFolderPathFromId($iID, '/'), _GDIPlus_SaveImage2BinaryString($resizedbitmap)) ;sends binary image to server directly
;;; - END CODE THAT REQUIRES UPDATE

does NOT work. Anyone have any ideas? Thanks in advance.

UEZ:
FYI: _ScreenCapture_Capture() produces a WinAPI bitmap, not a GDI+ bitmap. That means you have to convert that image to a GDI+ image -> _GDIPlus_BitmapCreateHBITMAPFromBitmap().

PostImage(_GetImageFolderPathFromId($iID, '/'), _GDIPlus_SaveImage2BinaryString($resizedbitmap)) ;sends binary image to server directly

_GDIPlus_SaveImage2BinaryString() returns a binary string. What do you get, and what is the binary size? Further, what is PostImage() doing exactly?
Br, UEZ

UEZ:
@mikell: I think his question was how to take a screenshot and upload it to a cloud service without saving the screenshot to disk first.
Br, UEZ

mikell:
Yes, I saw this after posting, and that's why I deleted my post just before you answered, sorry :unsure:
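Not from the original thread - a minimal diagnostic sketch for UEZ's question about what _GDIPlus_SaveImage2BinaryString() returns and how large it is. It only uses built-in AutoIt functions plus the $resizedbitmap variable from the snippets above:

; Diagnostic sketch (assumption: $resizedbitmap is the resized GDI+ bitmap from the code above).
; A valid JPEG should come back as a Binary value starting with the bytes FF D8 FF.
$bImage = _GDIPlus_SaveImage2BinaryString($resizedbitmap)
ConsoleWrite("Type:      " & VarGetType($bImage) & @CRLF)           ; expected: Binary
ConsoleWrite("Size:      " & BinaryLen($bImage) & " bytes" & @CRLF) ; roughly the JPEG file size
ConsoleWrite("Signature: " & Hex(BinaryMid($bImage, 1, 3)) & @CRLF) ; expected: FFD8FF for a JPEG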
ame1011:
Thanks for responding, UEZ. PostImage() looks like this:

Func PostImage($iLoc, $img)
    Local $sChunk, $sData
    Local $path = "sr_images_" & @YEAR & "_" & @MON & "/" & $user & "/" & StringLeft($iLoc, 11)
    Local $file = StringRight($iLoc, 7)
    $hOpen = _WinHttpOpen("ImageUpload_" & $user & "_" & $iLoc)
    $hConnect = _WinHttpConnect($hOpen, $sURL)
    $hRequest = _WinHttpOpenRequest($hConnect, "POST", $sPURL)
    $sData = ""
    $sData &= '----------darker' & @CRLF
    $sData &= 'Content-Disposition: form-data; name="path"' & @CRLF & @CRLF
    $sData &= $path & @CRLF
    $sData &= '----------darker' & @CRLF
    $sData &= 'Content-Disposition: form-data; name="file_name"' & @CRLF & @CRLF
    $sData &= $file & @CRLF
    $sData &= '----------darker' & @CRLF
    $sData &= 'Content-Disposition: form-data; name="image"; filename="' & $file & '"' & @CRLF
    $sData &= 'Content-Type: image/jpg' & @CRLF & @CRLF
    $sData &= $img & @CRLF
    $sData &= '----------darker' & @CRLF
    _WinHttpSendRequest($hRequest, "Content-Type: multipart/form-data; boundary=--------darker", Binary($sData))
    _WinHttpReceiveResponse($hRequest)
    _WinHttpCloseHandle($hRequest)
    _WinHttpCloseHandle($hConnect)
    _WinHttpCloseHandle($hOpen)
EndFunc   ;==>PostImage

Am I not converting the bitmap properly with the following lines?

$hbitmap = _ScreenCapture_Capture('', $iScreenCapDimensions[1], $iScreenCapDimensions[2], $iScreenCapDimensions[3], $iScreenCapDimensions[4])
$bitmap = _GDIPlus_BitmapCreateFromHBITMAP($hbitmap)
$graphics = _GDIPlus_ImageGetGraphicsContext($bitmap)
$resizedbitmap = _GDIPlus_BitmapCreateFromGraphics($iWidth, $iHeight, $graphics)
$graphics2 = _GDIPlus_ImageGetGraphicsContext($resizedbitmap)
_GDIPLUS_GraphicsSetInterpolationMode($graphics2, $InterpolationModeHighQualityBicubic)
_GDIPlus_GraphicsDrawImageRect($graphics2, $bitmap, 0, 0, $iWidth, $iHeight)
$bImage = _GDIPlus_SaveImage2BinaryString($resizedbitmap)

I would like to keep these settings (width, height, interpolation mode, etc.) when I transfer the image via HTTP.

UEZ:
I would rather do it this way:

$hbitmap = _ScreenCapture_Capture('', $iScreenCapDimensions[1], $iScreenCapDimensions[2], $iScreenCapDimensions[3], $iScreenCapDimensions[4])
$bitmap = _GDIPlus_BitmapCreateFromHBITMAP($hbitmap)
$resizedbitmap = _GDIPlus_BitmapCreateFromScan0($iWidth, $iHeight)
$graphics = _GDIPlus_ImageGetGraphicsContext($resizedbitmap)
_GDIPLUS_GraphicsSetInterpolationMode($graphics, $InterpolationModeHighQualityBicubic)
_GDIPlus_GraphicsDrawImageRect($graphics, $bitmap, 0, 0, $iWidth, $iHeight)
$bImage = _GDIPlus_SaveImage2BinaryString($resizedbitmap)

Br, UEZ

ame1011:
I implemented the code above. The size of the images using my initial method (SaveToFile, FileRead, HTTPPost, FileDelete) is ~75-100 KB. The size of the images using this binary method is ~200 KB, and they also do not work: when viewing them in Windows I get the error "Windows Photo Viewer can't open this picture because the file appears to be damaged, corrupted, or is too large". Any ideas?
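A possible explanation, offered as an assumption rather than something confirmed in the thread: when a Binary value such as $img is concatenated into a string with &, AutoIt first converts it to its hexadecimal text form ("0xFFD8..."), so the multipart body ends up carrying hex text instead of raw JPEG bytes, which would roughly double the size and matches the ~200 KB observation. A minimal sketch of one way around that, reusing the PostImage() body above (BinaryToString() is a built-in function; its default ANSI conversion mimics what FileRead() returned in the original temp-file version):

; Sketch only - convert the JPEG binary to a raw byte string before concatenating,
; instead of letting & stringify it as "0x..." hex text.
$sData &= 'Content-Disposition: form-data; name="image"; filename="' & $file & '"' & @CRLF
$sData &= 'Content-Type: image/jpg' & @CRLF & @CRLF
$sData &= BinaryToString($img) & @CRLF ; raw bytes, not the hex representation
$sData &= '----------darker' & @CRLF
_WinHttpSendRequest($hRequest, "Content-Type: multipart/form-data; boundary=--------darker", Binary($sData))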
KWindowSystem kx11extras.cpp 1/* 2 This file is part of the KDE libraries 3 SPDX-FileCopyrightText: 1999 Matthias Ettrich <[email protected]> 4 SPDX-FileCopyrightText: 2007 Lubos Lunak <[email protected]> 5 SPDX-FileCopyrightText: 2014 Martin Gräßlin <[email protected]> 6 7 SPDX-License-Identifier: LGPL-2.1-or-later 8*/ 9 10#include "kx11extras.h" 11 12// clang-format off 13#include <kxerrorhandler_p.h> 14#include <fixx11h.h> 15#include <kxutils_p.h> 16// clang-format on 17 18#include "cptr_p.h" 19#include "kwindowsystem.h" 20#include "kwindowsystem_debug.h" 21#include "netwm.h" 22#include "kxcbevent_p.h" 23 24#include <QAbstractNativeEventFilter> 25#include <QGuiApplication> 26#include <QMetaMethod> 27#include <QRect> 28#include <QScreen> 29#include <private/qtx11extras_p.h> 30 31#include <X11/Xatom.h> 32#include <X11/Xutil.h> 33#include <X11/extensions/Xfixes.h> 34#include <xcb/xcb.h> 35#include <xcb/xfixes.h> 36 37// QPoint and QSize all have handy / operators which are useful for scaling, positions and sizes for high DPI support 38// QRect does not, so we create one for internal purposes within this class 39inline QRect operator/(const QRect &rectangle, qreal factor) 40{ 41 return QRect(rectangle.topLeft() / factor, rectangle.size() / factor); 42} 43 44class MainThreadInstantiator : public QObject 45{ 47 48public: 49 MainThreadInstantiator(KX11Extras::FilterInfo _what); 50 Q_INVOKABLE NETEventFilter *createNETEventFilter(); 51 52private: 53 KX11Extras::FilterInfo m_what; 54}; 55 56class NETEventFilter : public NETRootInfo, public QAbstractNativeEventFilter 57{ 58public: 59 NETEventFilter(KX11Extras::FilterInfo _what); 60 ~NETEventFilter() override; 61 void activate(); 62 QList<WId> windows; 63 QList<WId> stackingOrder; 64 65 struct StrutData { 66 StrutData(WId window_, const NETStrut &strut_, int desktop_) 67 : window(window_) 68 , strut(strut_) 69 , desktop(desktop_) 70 { 71 } 72 WId window; 73 NETStrut strut; 74 int desktop; 75 }; 76 QList<StrutData> strutWindows; 77 QList<WId> possibleStrutWindows; 78 bool strutSignalConnected; 79 bool compositingEnabled; 80 bool haveXfixes; 81 KX11Extras::FilterInfo what; 82 int xfixesEventBase; 83 bool mapViewport(); 84 85 bool nativeEventFilter(const QByteArray &eventType, void *message, qintptr *) override; 86 87 void updateStackingOrder(); 88 bool removeStrutWindow(WId); 89 90protected: 91 void addClient(xcb_window_t) override; 92 void removeClient(xcb_window_t) override; 93 94private: 95 bool nativeEventFilter(xcb_generic_event_t *event); 96 xcb_window_t winId; 97 xcb_window_t m_appRootWindow; 98}; 99 100static Atom net_wm_cm; 101static void create_atoms(); 102 103static inline const QRect &displayGeometry() 104{ 105 static QRect displayGeometry; 106 static bool isDirty = true; 107 108 if (isDirty) { 109 static QList<QMetaObject::Connection> connections; 110 auto dirtify = [&] { 111 isDirty = true; 112 for (const QMetaObject::Connection &con : std::as_const(connections)) { 114 } 115 connections.clear(); 116 }; 117 120 const QList<QScreen *> screenList = QGuiApplication::screens(); 121 QRegion region; 122 for (int i = 0; i < screenList.count(); ++i) { 123 const QScreen *screen = screenList.at(i); 124 connections << QObject::connect(screen, &QScreen::geometryChanged, dirtify); 125 const QRect geometry = screen->geometry(); 126 const qreal dpr = screen->devicePixelRatio(); 127 region += QRect(geometry.topLeft(), geometry.size() * dpr); 128 } 129 displayGeometry = region.boundingRect(); 130 isDirty = false; 131 } 132 133 return displayGeometry; 
134} 135 136static inline int displayWidth() 137{ 138 return displayGeometry().width(); 139} 140 141static inline int displayHeight() 142{ 143 return displayGeometry().height(); 144} 145 146// clang-format off 147static const NET::Properties windowsProperties = NET::ClientList | NET::ClientListStacking | 148 NET::Supported | 149 NET::NumberOfDesktops | 150 NET::DesktopGeometry | 151 NET::DesktopViewport | 152 NET::CurrentDesktop | 153 NET::DesktopNames | 154 NET::ActiveWindow | 155 NET::WorkArea; 156static const NET::Properties2 windowsProperties2 = NET::WM2ShowingDesktop; 157 158// ClientList and ClientListStacking is not per-window information, but a desktop information, 159// so track it even with only INFO_BASIC 160static const NET::Properties desktopProperties = NET::ClientList | NET::ClientListStacking | 161 NET::Supported | 162 NET::NumberOfDesktops | 163 NET::DesktopGeometry | 164 NET::DesktopViewport | 165 NET::CurrentDesktop | 166 NET::DesktopNames | 167 NET::ActiveWindow | 168 NET::WorkArea; 169static const NET::Properties2 desktopProperties2 = NET::WM2ShowingDesktop; 170// clang-format on 171 172MainThreadInstantiator::MainThreadInstantiator(KX11Extras::FilterInfo _what) 173 : QObject() 174 , m_what(_what) 175{ 176} 177 178NETEventFilter *MainThreadInstantiator::createNETEventFilter() 179{ 180 return new NETEventFilter(m_what); 181} 182 183NETEventFilter::NETEventFilter(KX11Extras::FilterInfo _what) 184 : NETRootInfo(QX11Info::connection(), 185 _what >= KX11Extras::INFO_WINDOWS ? windowsProperties : desktopProperties, 186 _what >= KX11Extras::INFO_WINDOWS ? windowsProperties2 : desktopProperties2, 187 QX11Info::appScreen(), 188 false) 190 , strutSignalConnected(false) 191 , compositingEnabled(false) 192 , haveXfixes(false) 193 , what(_what) 194 , winId(XCB_WINDOW_NONE) 195 , m_appRootWindow(QX11Info::appRootWindow()) 196{ 198 199 int errorBase; 200 if ((haveXfixes = XFixesQueryExtension(QX11Info::display(), &xfixesEventBase, &errorBase))) { 201 create_atoms(); 202 winId = xcb_generate_id(QX11Info::connection()); 203 uint32_t values[] = {true, XCB_EVENT_MASK_PROPERTY_CHANGE | XCB_EVENT_MASK_STRUCTURE_NOTIFY}; 204 xcb_create_window(QX11Info::connection(), 205 XCB_COPY_FROM_PARENT, 206 winId, 207 m_appRootWindow, 208 0, 209 0, 210 1, 211 1, 212 0, 213 XCB_WINDOW_CLASS_INPUT_ONLY, 214 XCB_COPY_FROM_PARENT, 215 XCB_CW_OVERRIDE_REDIRECT | XCB_CW_EVENT_MASK, 216 values); 217 XFixesSelectSelectionInput(QX11Info::display(), 218 winId, 219 net_wm_cm, 220 XFixesSetSelectionOwnerNotifyMask | XFixesSelectionWindowDestroyNotifyMask | XFixesSelectionClientCloseNotifyMask); 221 compositingEnabled = XGetSelectionOwner(QX11Info::display(), net_wm_cm) != None; 222 } 223} 224 225NETEventFilter::~NETEventFilter() 226{ 227 if (QX11Info::connection() && winId != XCB_WINDOW_NONE) { 228 xcb_destroy_window(QX11Info::connection(), winId); 229 winId = XCB_WINDOW_NONE; 230 } 231} 232 233// not virtual, but it's called directly only from init() 234void NETEventFilter::activate() 235{ 237 updateStackingOrder(); 238} 239 240bool NETEventFilter::nativeEventFilter(const QByteArray &eventType, void *message, qintptr *) 241{ 242 if (eventType != "xcb_generic_event_t") { 243 // only interested in XCB events of course 244 return false; 245 } 246 return nativeEventFilter(reinterpret_cast<xcb_generic_event_t *>(message)); 247} 248 249bool NETEventFilter::nativeEventFilter(xcb_generic_event_t *ev) 250{ 252 const uint8_t eventType = ev->response_type & ~0x80; 253 254 if (eventType == xfixesEventBase + 
XCB_XFIXES_SELECTION_NOTIFY) { 255 xcb_xfixes_selection_notify_event_t *event = reinterpret_cast<xcb_xfixes_selection_notify_event_t *>(ev); 256 if (event->window == winId) { 257 bool haveOwner = event->owner != XCB_WINDOW_NONE; 258 if (compositingEnabled != haveOwner) { 259 compositingEnabled = haveOwner; 260 Q_EMIT KX11Extras::self()->compositingChanged(compositingEnabled); 261 } 262 return true; 263 } 264 // Qt compresses XFixesSelectionNotifyEvents without caring about the actual window 265 // gui/kernel/qapplication_x11.cpp 266 // until that can be assumed fixed, we also react on events on the root (caused by Qts own compositing tracker) 267 if (event->window == m_appRootWindow) { 268 if (event->selection == net_wm_cm) { 269 bool haveOwner = event->owner != XCB_WINDOW_NONE; 270 if (compositingEnabled != haveOwner) { 271 compositingEnabled = haveOwner; 272 Q_EMIT KX11Extras::self()->compositingChanged(compositingEnabled); 273 } 274 // NOTICE this is not our event, we just randomly captured it from Qt -> pass on 275 return false; 276 } 277 } 278 return false; 279 } 280 281 xcb_window_t eventWindow = XCB_WINDOW_NONE; 282 switch (eventType) { 283 case XCB_CLIENT_MESSAGE: 284 eventWindow = reinterpret_cast<xcb_client_message_event_t *>(ev)->window; 285 break; 286 case XCB_PROPERTY_NOTIFY: 287 eventWindow = reinterpret_cast<xcb_property_notify_event_t *>(ev)->window; 288 break; 289 case XCB_CONFIGURE_NOTIFY: 290 eventWindow = reinterpret_cast<xcb_configure_notify_event_t *>(ev)->window; 291 break; 292 } 293 294 if (eventWindow == m_appRootWindow) { 295 int old_current_desktop = currentDesktop(); 296 xcb_window_t old_active_window = activeWindow(); 297 int old_number_of_desktops = numberOfDesktops(); 298 bool old_showing_desktop = showingDesktop(); 299 NET::Properties props; 300 NET::Properties2 props2; 301 NETRootInfo::event(ev, &props, &props2); 302 303 if ((props & CurrentDesktop) && currentDesktop() != old_current_desktop) { 304 Q_EMIT KX11Extras::self()->currentDesktopChanged(currentDesktop()); 305 } 306 if ((props & DesktopViewport) && mapViewport() && currentDesktop() != old_current_desktop) { 307 Q_EMIT KX11Extras::self()->currentDesktopChanged(currentDesktop()); 308 } 309 if ((props & ActiveWindow) && activeWindow() != old_active_window) { 310 Q_EMIT KX11Extras::self()->activeWindowChanged(activeWindow()); 311 } 312 if (props & DesktopNames) { 313 Q_EMIT KX11Extras::self()->desktopNamesChanged(); 314 } 315 if ((props & NumberOfDesktops) && numberOfDesktops() != old_number_of_desktops) { 316 Q_EMIT KX11Extras::self()->numberOfDesktopsChanged(numberOfDesktops()); 317 } 318 if ((props & DesktopGeometry) && mapViewport() && numberOfDesktops() != old_number_of_desktops) { 319 Q_EMIT KX11Extras::self()->numberOfDesktopsChanged(numberOfDesktops()); 320 } 321 if (props & WorkArea) { 322 Q_EMIT KX11Extras::self()->workAreaChanged(); 323 } 324 if (props & ClientListStacking) { 325 updateStackingOrder(); 326 Q_EMIT KX11Extras::self()->stackingOrderChanged(); 327 } 328 if ((props2 & WM2ShowingDesktop) && showingDesktop() != old_showing_desktop) { 330 } 331 } else if (windows.contains(eventWindow)) { 332 NETWinInfo ni(QX11Info::connection(), eventWindow, m_appRootWindow, NET::Properties(), NET::Properties2()); 333 NET::Properties dirtyProperties; 334 NET::Properties2 dirtyProperties2; 335 ni.event(ev, &dirtyProperties, &dirtyProperties2); 336 if (eventType == XCB_PROPERTY_NOTIFY) { 337 xcb_property_notify_event_t *event = reinterpret_cast<xcb_property_notify_event_t *>(ev); 338 if (event->atom 
== XCB_ATOM_WM_HINTS) { 339 dirtyProperties |= NET::WMIcon; // support for old icons 340 } else if (event->atom == XCB_ATOM_WM_NAME) { 341 dirtyProperties |= NET::WMName; // support for old name 342 } else if (event->atom == XCB_ATOM_WM_ICON_NAME) { 343 dirtyProperties |= NET::WMIconName; // support for old iconic name 344 } 345 } 346 if (mapViewport() && (dirtyProperties & (NET::WMState | NET::WMGeometry))) { 347 /* geometry change -> possible viewport change 348 * state change -> possible NET::Sticky change 349 */ 350 dirtyProperties |= NET::WMDesktop; 351 } 352 if ((dirtyProperties & NET::WMStrut) != 0) { 353 removeStrutWindow(eventWindow); 354 if (!possibleStrutWindows.contains(eventWindow)) { 355 possibleStrutWindows.append(eventWindow); 356 } 357 } 358 if (dirtyProperties || dirtyProperties2) { 359 Q_EMIT KX11Extras::self()->windowChanged(eventWindow, dirtyProperties, dirtyProperties2); 360 361 if ((dirtyProperties & NET::WMStrut) != 0) { 362 Q_EMIT KX11Extras::self()->strutChanged(); 363 } 364 } 365 } 366 367 return false; 368} 369 370bool NETEventFilter::removeStrutWindow(WId w) 371{ 372 for (QList<StrutData>::Iterator it = strutWindows.begin(); it != strutWindows.end(); ++it) { 373 if ((*it).window == w) { 374 strutWindows.erase(it); 375 return true; 376 } 377 } 378 return false; 379} 380 381void NETEventFilter::updateStackingOrder() 382{ 383 stackingOrder.clear(); 384 for (int i = 0; i < clientListStackingCount(); i++) { 385 stackingOrder.append(clientListStacking()[i]); 386 } 387} 388 389void NETEventFilter::addClient(xcb_window_t w) 390{ 391 if ((what >= KX11Extras::INFO_WINDOWS)) { 392 xcb_connection_t *c = QX11Info::connection(); 393 UniqueCPointer<xcb_get_window_attributes_reply_t> attr(xcb_get_window_attributes_reply(c, xcb_get_window_attributes_unchecked(c, w), nullptr)); 394 395 uint32_t events = XCB_EVENT_MASK_PROPERTY_CHANGE | XCB_EVENT_MASK_STRUCTURE_NOTIFY; 396 if (attr) { 397 events = events | attr->your_event_mask; 398 } 399 xcb_change_window_attributes(c, w, XCB_CW_EVENT_MASK, &events); 400 } 401 402 bool emit_strutChanged = false; 403 404 if (strutSignalConnected) { 405 NETWinInfo info(QX11Info::connection(), w, QX11Info::appRootWindow(), NET::WMStrut | NET::WMDesktop, NET::Properties2()); 406 NETStrut strut = info.strut(); 407 if (strut.left || strut.top || strut.right || strut.bottom) { 408 strutWindows.append(StrutData(w, strut, info.desktop())); 409 emit_strutChanged = true; 410 } 411 } else { 412 possibleStrutWindows.append(w); 413 } 414 415 windows.append(w); 416 Q_EMIT KX11Extras::self()->windowAdded(w); 417 if (emit_strutChanged) { 418 Q_EMIT KX11Extras::self()->strutChanged(); 419 } 420} 421 422void NETEventFilter::removeClient(xcb_window_t w) 423{ 424 bool emit_strutChanged = removeStrutWindow(w); 425 if (strutSignalConnected && possibleStrutWindows.contains(w)) { 426 NETWinInfo info(QX11Info::connection(), w, QX11Info::appRootWindow(), NET::WMStrut, NET::Properties2()); 427 NETStrut strut = info.strut(); 428 if (strut.left || strut.top || strut.right || strut.bottom) { 429 emit_strutChanged = true; 430 } 431 } 432 433 possibleStrutWindows.removeAll(w); 434 windows.removeAll(w); 435 Q_EMIT KX11Extras::self()->windowRemoved(w); 436 if (emit_strutChanged) { 437 Q_EMIT KX11Extras::self()->strutChanged(); 438 } 439} 440 441bool NETEventFilter::mapViewport() 442{ 443 // compiz claims support even though it doesn't use virtual desktops :( 444 // if( isSupported( NET::DesktopViewport ) && !isSupported( NET::NumberOfDesktops )) 445 446 // this test is duplicated 
in KWindowSystem::mapViewport() 447 if (isSupported(NET::DesktopViewport) && numberOfDesktops(true) <= 1 448 && (desktopGeometry().width > displayWidth() || desktopGeometry().height > displayHeight())) { 449 return true; 450 } 451 return false; 452} 453 454static bool atoms_created = false; 455 456static Atom _wm_protocols; 457static Atom _wm_change_state; 458static Atom kwm_utf8_string; 459 460static void create_atoms() 461{ 462 if (!atoms_created) { 463 const int max = 20; 464 Atom *atoms[max]; 465 const char *names[max]; 466 Atom atoms_return[max]; 467 int n = 0; 468 469 atoms[n] = &_wm_protocols; 470 names[n++] = "WM_PROTOCOLS"; 471 472 atoms[n] = &_wm_change_state; 473 names[n++] = "WM_CHANGE_STATE"; 474 475 atoms[n] = &kwm_utf8_string; 476 names[n++] = "UTF8_STRING"; 477 478 char net_wm_cm_name[100]; 479 sprintf(net_wm_cm_name, "_NET_WM_CM_S%d", QX11Info::appScreen()); 480 atoms[n] = &net_wm_cm; 481 names[n++] = net_wm_cm_name; 482 483 // we need a const_cast for the shitty X API 484 XInternAtoms(QX11Info::display(), const_cast<char **>(names), n, false, atoms_return); 485 for (int i = 0; i < n; i++) { 486 *atoms[i] = atoms_return[i]; 487 } 488 489 atoms_created = True; 490 } 491} 492 493#define CHECK_X11 \ 494 if (!KWindowSystem::isPlatformX11()) { \ 495 qCWarning(LOG_KWINDOWSYSTEM) << Q_FUNC_INFO << "may only be used on X11"; \ 496 return {}; \ 497 } 498 499#define CHECK_X11_VOID \ 500 if (!KWindowSystem::isPlatformX11()) { \ 501 qCWarning(LOG_KWINDOWSYSTEM) << Q_FUNC_INFO << "may only be used on X11"; \ 502 return; \ 503 } 504 505// WARNING 506// you have to call s_d_func() again after calling this function if you want a valid pointer! 507void KX11Extras::init(FilterInfo what) 508{ 509 NETEventFilter *const s_d = s_d_func(); 510 511 if (what >= INFO_WINDOWS) { 512 what = INFO_WINDOWS; 513 } else { 514 what = INFO_BASIC; 515 } 516 517 if (!s_d || s_d->what < what) { 518 const bool wasCompositing = s_d ? 
s_d->compositingEnabled : false; 519 MainThreadInstantiator instantiator(what); 520 NETEventFilter *filter; 521 if (instantiator.thread() == QCoreApplication::instance()->thread()) { 522 filter = instantiator.createNETEventFilter(); 523 } else { 524 // the instantiator is not in the main app thread, which implies 525 // we are being called in a thread that is not the main app thread 526 // so we move the instantiator to the main app thread and invoke 527 // the method with a blocking call 528 instantiator.moveToThread(QCoreApplication::instance()->thread()); 529 QMetaObject::invokeMethod(&instantiator, "createNETEventFilter", Qt::BlockingQueuedConnection, Q_RETURN_ARG(NETEventFilter *, filter)); 530 } 531 d.reset(filter); 532 d->activate(); 533 if (wasCompositing != s_d_func()->compositingEnabled) { 534 Q_EMIT KX11Extras::self()->compositingChanged(s_d_func()->compositingEnabled); 535 } 536 } 537} 538 539KX11Extras *KX11Extras::self() 540{ 541 static KX11Extras instance; 542 return &instance; 543} 544 546{ 547 CHECK_X11 548 KX11Extras::self()->init(INFO_BASIC); 549 return KX11Extras::self()->s_d_func()->windows; 550} 551 553{ 554 CHECK_X11 555 return windows().contains(w); 556} 557 559{ 560 CHECK_X11 561 KX11Extras::self()->init(INFO_BASIC); 562 return KX11Extras::self()->s_d_func()->stackingOrder; 563} 564 566{ 567 CHECK_X11 568 NETEventFilter *const s_d = KX11Extras::self()->s_d_func(); 569 if (s_d) { 570 return s_d->activeWindow(); 571 } 572 NETRootInfo info(QX11Info::connection(), NET::ActiveWindow, NET::Properties2(), QX11Info::appScreen()); 573 return info.activeWindow(); 574} 575 576void KX11Extras::activateWindow(WId win, long time) 577{ 578 CHECK_X11_VOID 579 NETRootInfo info(QX11Info::connection(), NET::Properties(), NET::Properties2(), QX11Info::appScreen()); 580 if (time == 0) { 581 time = QX11Info::appUserTime(); 582 } 583 info.setActiveWindow(win, NET::FromApplication, time, QGuiApplication::focusWindow() ? 
QGuiApplication::focusWindow()->winId() : 0); 584} 585 586void KX11Extras::forceActiveWindow(WId win, long time) 587{ 588 CHECK_X11_VOID 589 NETRootInfo info(QX11Info::connection(), NET::Properties(), NET::Properties2(), QX11Info::appScreen()); 590 if (time == 0) { 591 time = QX11Info::appTime(); 592 } 593 info.setActiveWindow(win, NET::FromTool, time, 0); 594} 595 597{ 598 CHECK_X11_VOID 599 forceActiveWindow(win->winId(), time); 600} 601 603{ 604 CHECK_X11 605 KX11Extras::self()->init(INFO_BASIC); 606 if (KX11Extras::self()->s_d_func()->haveXfixes) { 607 return KX11Extras::self()->s_d_func()->compositingEnabled; 608 } else { 609 create_atoms(); 610 return XGetSelectionOwner(QX11Info::display(), net_wm_cm); 611 } 612} 613 615{ 616 CHECK_X11 617 if (!QX11Info::connection()) { 618 return 1; 619 } 620 621 if (mapViewport()) { 622 KX11Extras::self()->init(INFO_BASIC); 623 NETEventFilter *const s_d = KX11Extras::self()->s_d_func(); 624 NETPoint p = s_d->desktopViewport(s_d->currentDesktop(true)); 625 return KX11Extras::self()->viewportToDesktop(QPoint(p.x, p.y) / qApp->devicePixelRatio()); 626 } 627 628 NETEventFilter *const s_d = KX11Extras::self()->s_d_func(); 629 if (s_d) { 630 return s_d->currentDesktop(true); 631 } 632 NETRootInfo info(QX11Info::connection(), NET::CurrentDesktop, NET::Properties2(), QX11Info::appScreen()); 633 return info.currentDesktop(true); 634} 635 637{ 638 CHECK_X11 639 if (!QX11Info::connection()) { 640 return 1; 641 } 642 643 if (mapViewport()) { 644 KX11Extras::self()->init(INFO_BASIC); 645 NETEventFilter *const s_d = KX11Extras::self()->s_d_func(); 646 NETSize s = s_d->desktopGeometry(); 647 return s.width / displayWidth() * s.height / displayHeight(); 648 } 649 650 NETEventFilter *const s_d = KX11Extras::self()->s_d_func(); 651 if (s_d) { 652 return s_d->numberOfDesktops(true); 653 } 654 NETRootInfo info(QX11Info::connection(), NET::NumberOfDesktops, NET::Properties2(), QX11Info::appScreen()); 655 return info.numberOfDesktops(true); 656} 657 659{ 660 CHECK_X11_VOID 661 if (mapViewport()) { 662 KX11Extras::self()->init(INFO_BASIC); 663 NETEventFilter *const s_d = KX11Extras::self()->s_d_func(); 664 NETRootInfo info(QX11Info::connection(), NET::Properties(), NET::Properties2(), QX11Info::appScreen()); 665 QPoint pos = KX11Extras::self()->desktopToViewport(desktop, true); 666 NETPoint p; 667 p.x = pos.x(); 668 p.y = pos.y(); 669 info.setDesktopViewport(s_d->currentDesktop(true), p); 670 return; 671 } 672 NETRootInfo info(QX11Info::connection(), NET::Properties(), NET::Properties2(), QX11Info::appScreen()); 673 info.setCurrentDesktop(desktop, true); 674} 675 676void KX11Extras::setOnAllDesktops(WId win, bool b) 677{ 678 CHECK_X11_VOID 679 if (mapViewport()) { 680 if (b) { 681 setState(win, NET::Sticky); 682 } else { 684 } 685 return; 686 } 687 NETWinInfo info(QX11Info::connection(), win, QX11Info::appRootWindow(), NET::WMDesktop, NET::Properties2()); 688 if (b) { 690 } else if (info.desktop(true) == NETWinInfo::OnAllDesktops) { 691 NETRootInfo rinfo(QX11Info::connection(), NET::CurrentDesktop, NET::Properties2(), QX11Info::appScreen()); 692 info.setDesktop(rinfo.currentDesktop(true), true); 693 } 694} 695 696void KX11Extras::setOnDesktop(WId win, int desktop) 697{ 698 CHECK_X11_VOID 699 if (mapViewport()) { 700 if (desktop == NET::OnAllDesktops) { 701 return setOnAllDesktops(win, true); 702 } else { 704 } 705 KX11Extras::self()->init(INFO_BASIC); 706 QPoint p = KX11Extras::self()->desktopToViewport(desktop, false); 707 Window dummy; 708 int x; 709 int y; 710 
unsigned int w; 711 unsigned int h; 712 unsigned int b; 713 unsigned int dp; 714 XGetGeometry(QX11Info::display(), win, &dummy, &x, &y, &w, &h, &b, &dp); 715 // get global position 716 XTranslateCoordinates(QX11Info::display(), win, QX11Info::appRootWindow(), 0, 0, &x, &y, &dummy); 717 x += w / 2; // center 718 y += h / 2; 719 // transform to coordinates on the current "desktop" 720 x = x % displayWidth(); 721 y = y % displayHeight(); 722 if (x < 0) { 723 x = x + displayWidth(); 724 } 725 if (y < 0) { 726 y = y + displayHeight(); 727 } 728 x += p.x(); // move to given "desktop" 729 y += p.y(); 730 x -= w / 2; // from center back to topleft 731 y -= h / 2; 732 p = KX11Extras::self()->constrainViewportRelativePosition(QPoint(x, y)); 733 int flags = (NET::FromTool << 12) | (0x03 << 8) | 10; // from tool(?), x/y, static gravity 734 NETEventFilter *const s_d = KX11Extras::self()->s_d_func(); 735 s_d->moveResizeWindowRequest(win, flags, p.x(), p.y(), w, h); 736 return; 737 } 738 NETWinInfo info(QX11Info::connection(), win, QX11Info::appRootWindow(), NET::WMDesktop, NET::Properties2()); 739 info.setDesktop(desktop, true); 740} 741 742void KX11Extras::setOnActivities(WId win, const QStringList &activities) 743{ 744 CHECK_X11_VOID 745 NETWinInfo info(QX11Info::connection(), win, QX11Info::appRootWindow(), NET::Properties(), NET::WM2Activities); 746 info.setActivities(activities.join(QLatin1Char(',')).toLatin1().constData()); 747} 748 749QPixmap KX11Extras::icon(WId win, int width, int height, bool scale) 750{ 751 CHECK_X11 752 return icon(win, width, height, scale, NETWM | WMHints | ClassHint | XApp); 753} 754 755QPixmap iconFromNetWinInfo(int width, int height, bool scale, int flags, NETWinInfo *info) 756{ 757 QPixmap result; 758 if (!info) { 759 return result; 760 } 761 if (flags & KX11Extras::NETWM) { 762 NETIcon ni = info->icon(width, height); 763 if (ni.data && ni.size.width > 0 && ni.size.height > 0) { 764 QImage img((uchar *)ni.data, (int)ni.size.width, (int)ni.size.height, QImage::Format_ARGB32); 765 if (scale && width > 0 && height > 0 && img.size() != QSize(width, height) && !img.isNull()) { 766 img = img.scaled(width, height, Qt::IgnoreAspectRatio, Qt::SmoothTransformation); 767 } 768 if (!img.isNull()) { 769 result = QPixmap::fromImage(img); 770 } 771 return result; 772 } 773 } 774 775 if (flags & KX11Extras::WMHints) { 776 xcb_pixmap_t p = info->icccmIconPixmap(); 777 xcb_pixmap_t p_mask = info->icccmIconPixmapMask(); 778 779 if (p != XCB_PIXMAP_NONE) { 780 QPixmap pm = KXUtils::createPixmapFromHandle(info->xcbConnection(), p, p_mask); 781 if (scale && width > 0 && height > 0 && !pm.isNull() // 782 && (pm.width() != width || pm.height() != height)) { 784 } else { 785 result = pm; 786 } 787 } 788 } 789 790 // Since width can be any arbitrary size, but the icons cannot, 791 // take the nearest value for best results (ignoring 22 pixel 792 // icons as they don't exist for apps): 793 int iconWidth; 794 if (width < 24) { 795 iconWidth = 16; 796 } else if (width < 40) { 797 iconWidth = 32; 798 } else if (width < 56) { 799 iconWidth = 48; 800 } else if (width < 96) { 801 iconWidth = 64; 802 } else if (width < 192) { 803 iconWidth = 128; 804 } else { 805 iconWidth = 256; 806 } 807 808 if (flags & KX11Extras::ClassHint) { 809 // Try to load the icon from the classhint if the app didn't specify 810 // its own: 811 if (result.isNull()) { 813 const QPixmap pm = icon.isNull() ? 
QPixmap() : icon.pixmap(iconWidth, iconWidth); 814 if (scale && !pm.isNull()) { 815 result = QPixmap::fromImage(pm.toImage().scaled(width, height, Qt::IgnoreAspectRatio, Qt::SmoothTransformation)); 816 } else { 817 result = pm; 818 } 819 } 820 } 821 822 if (flags & KX11Extras::XApp) { 823 // If the icon is still a null pixmap, load the icon for X applications 824 // as a last resort: 825 if (result.isNull()) { 826 const QIcon icon = QIcon::fromTheme(QStringLiteral("xorg")); 827 const QPixmap pm = icon.isNull() ? QPixmap() : icon.pixmap(iconWidth, iconWidth); 828 if (scale && !pm.isNull()) { 829 result = QPixmap::fromImage(pm.toImage().scaled(width, height, Qt::IgnoreAspectRatio, Qt::SmoothTransformation)); 830 } else { 831 result = pm; 832 } 833 } 834 } 835 return result; 836} 837 838QPixmap KX11Extras::icon(WId win, int width, int height, bool scale, int flags) 839{ 840 CHECK_X11 841 NETWinInfo info(QX11Info::connection(), win, QX11Info::appRootWindow(), NET::WMIcon, NET::WM2WindowClass | NET::WM2IconPixmap); 842 return iconFromNetWinInfo(width, height, scale, flags, &info); 843} 844 845QPixmap KX11Extras::icon(WId win, int width, int height, bool scale, int flags, NETWinInfo *info) 846{ 847 // No CHECK_X11 here, kwin_wayland calls this to get the icon for XWayland windows 848 width *= qGuiApp->devicePixelRatio(); 849 height *= qGuiApp->devicePixelRatio(); 850 851 if (info) { 852 return iconFromNetWinInfo(width, height, scale, flags, info); 853 } 854 CHECK_X11 855 856 NETWinInfo newInfo(QX11Info::connection(), win, QX11Info::appRootWindow(), NET::WMIcon, NET::WM2WindowClass | NET::WM2IconPixmap); 857 858 return iconFromNetWinInfo(width, height, scale, flags, &newInfo); 859} 860 861// enum values for ICCCM 4.1.2.4 and 4.1.4, defined to not depend on xcb-icccm 862enum { 863 _ICCCM_WM_STATE_WITHDRAWN = 0, 864 _ICCCM_WM_STATE_NORMAL = 1, 865 _ICCCM_WM_STATE_ICONIC = 3, 866}; 867 869{ 870 CHECK_X11_VOID 871 create_atoms(); 872 // as described in ICCCM 4.1.4 873 KXcbEvent<xcb_client_message_event_t> ev; 874 ev.response_type = XCB_CLIENT_MESSAGE; 875 ev.window = win; 876 ev.type = _wm_change_state; 877 ev.format = 32; 878 ev.data.data32[0] = _ICCCM_WM_STATE_ICONIC; 879 ev.data.data32[1] = 0; 880 ev.data.data32[2] = 0; 881 ev.data.data32[3] = 0; 882 ev.data.data32[4] = 0; 883 xcb_send_event(QX11Info::connection(), 884 false, 885 QX11Info::appRootWindow(), 886 XCB_EVENT_MASK_SUBSTRUCTURE_NOTIFY | XCB_EVENT_MASK_SUBSTRUCTURE_REDIRECT, 887 ev.buffer()); 888} 889 891{ 892 CHECK_X11_VOID 893 xcb_map_window(QX11Info::connection(), win); 894} 895 897{ 898 CHECK_X11 899 KX11Extras::self()->init(INFO_BASIC); 900 int desk = (desktop > 0 && desktop <= (int)KX11Extras::self()->s_d_func()->numberOfDesktops()) ? 
desktop : currentDesktop(); 901 if (desk <= 0) { 902 return displayGeometry() / qApp->devicePixelRatio(); 903 } 904 905 NETRect r = KX11Extras::self()->s_d_func()->workArea(desk); 906 if (r.size.width <= 0 || r.size.height <= 0) { // not set 907 return displayGeometry() / qApp->devicePixelRatio(); 908 } 909 910 return QRect(r.pos.x, r.pos.y, r.size.width, r.size.height) / qApp->devicePixelRatio(); 911} 912 913QRect KX11Extras::workArea(const QList<WId> &exclude, int desktop) 914{ 915 CHECK_X11 916 KX11Extras::self()->init(INFO_WINDOWS); // invalidates s_d_func's return value 917 NETEventFilter *const s_d = KX11Extras::self()->s_d_func(); 918 919 QRect all = displayGeometry(); 920 QRect a = all; 921 922 if (desktop == -1) { 923 desktop = s_d->currentDesktop(); 924 } 925 927 for (it1 = s_d->windows.constBegin(); it1 != s_d->windows.constEnd(); ++it1) { 928 if (exclude.contains(*it1)) { 929 continue; 930 } 931 932 // Kicker (very) extensively calls this function, causing hundreds of roundtrips just 933 // to repeatedly find out struts of all windows. Therefore strut values for strut 934 // windows are cached here. 935 NETStrut strut; 936 auto it2 = s_d->strutWindows.begin(); 937 for (; it2 != s_d->strutWindows.end(); ++it2) { 938 if ((*it2).window == *it1) { 939 break; 940 } 941 } 942 943 if (it2 != s_d->strutWindows.end()) { 944 if (!((*it2).desktop == desktop || (*it2).desktop == NETWinInfo::OnAllDesktops)) { 945 continue; 946 } 947 948 strut = (*it2).strut; 949 } else if (s_d->possibleStrutWindows.contains(*it1)) { 950 NETWinInfo info(QX11Info::connection(), (*it1), QX11Info::appRootWindow(), NET::WMStrut | NET::WMDesktop, NET::Properties2()); 951 strut = info.strut(); 952 s_d->possibleStrutWindows.removeAll(*it1); 953 s_d->strutWindows.append(NETEventFilter::StrutData(*it1, info.strut(), info.desktop())); 954 955 if (!(info.desktop() == desktop || info.desktop() == NETWinInfo::OnAllDesktops)) { 956 continue; 957 } 958 } else { 959 continue; // not a strut window 960 } 961 962 QRect r = all; 963 if (strut.left > 0) { 964 r.setLeft(r.left() + (int)strut.left); 965 } 966 if (strut.top > 0) { 967 r.setTop(r.top() + (int)strut.top); 968 } 969 if (strut.right > 0) { 970 r.setRight(r.right() - (int)strut.right); 971 } 972 if (strut.bottom > 0) { 973 r.setBottom(r.bottom() - (int)strut.bottom); 974 } 975 976 a = a.intersected(r); 977 } 978 return a / qApp->devicePixelRatio(); 979} 980 982{ 983 CHECK_X11 984 KX11Extras::self()->init(INFO_BASIC); 985 NETEventFilter *const s_d = KX11Extras::self()->s_d_func(); 986 987 bool isDesktopSane = (desktop > 0 && desktop <= (int)s_d->numberOfDesktops()); 988 const char *name = s_d->desktopName(isDesktopSane ? 
desktop : currentDesktop()); 989 990 if (name && name[0]) { 991 return QString::fromUtf8(name); 992 } 993 994 return KWindowSystem::tr("Desktop %1").arg(desktop); 995} 996 997void KX11Extras::setDesktopName(int desktop, const QString &name) 998{ 999 CHECK_X11_VOID 1000 NETEventFilter *const s_d = KX11Extras::self()->s_d_func(); 1001 1002 if (desktop <= 0 || desktop > (int)numberOfDesktops()) { 1003 desktop = currentDesktop(); 1004 } 1005 1006 if (s_d) { 1007 s_d->setDesktopName(desktop, name.toUtf8().constData()); 1008 return; 1009 } 1010 1011 NETRootInfo info(QX11Info::connection(), NET::Properties(), NET::Properties2(), QX11Info::appScreen()); 1012 info.setDesktopName(desktop, name.toUtf8().constData()); 1013} 1014 1015QString KX11Extras::readNameProperty(WId win, unsigned long atom) 1016{ 1017 CHECK_X11 1018 XTextProperty tp; 1019 char **text = nullptr; 1020 int count; 1021 QString result; 1022 if (XGetTextProperty(QX11Info::display(), win, &tp, atom) != 0 && tp.value != nullptr) { 1023 create_atoms(); 1024 1025 if (tp.encoding == kwm_utf8_string) { 1026 result = QString::fromUtf8((const char *)tp.value); 1027 } else if (XmbTextPropertyToTextList(QX11Info::display(), &tp, &text, &count) == Success && text != nullptr && count > 0) { 1028 result = QString::fromLocal8Bit(text[0]); 1029 } else if (tp.encoding == XA_STRING) { 1030 result = QString::fromLocal8Bit((const char *)tp.value); 1031 } 1032 if (text != nullptr) { 1033 XFreeStringList(text); 1034 } 1035 XFree(tp.value); 1036 } 1037 return result; 1038} 1039 1041{ 1042 CHECK_X11 1043 NETEventFilter *const s_d = KX11Extras::self()->s_d_func(); 1044 if (s_d) { 1045 return s_d->mapViewport(); 1046 } 1047 1048 // Handle case of not having a QGuiApplication 1049 if (!QX11Info::connection()) { 1050 return false; 1051 } 1052 1053 // avoid creating KWindowSystemPrivate 1054 NETRootInfo infos(QX11Info::connection(), NET::Supported, NET::Properties2(), QX11Info::appScreen()); 1055 if (!infos.isSupported(NET::DesktopViewport)) { 1056 return false; 1057 } 1058 NETRootInfo info(QX11Info::connection(), NET::NumberOfDesktops | NET::CurrentDesktop | NET::DesktopGeometry, NET::Properties2(), QX11Info::appScreen()); 1059 if (info.numberOfDesktops(true) <= 1 && (info.desktopGeometry().width > displayWidth() || info.desktopGeometry().height > displayHeight())) { 1060 return true; 1061 } 1062 return false; 1063} 1064 1065int KX11Extras::viewportWindowToDesktop(const QRect &rect) 1066{ 1067 CHECK_X11 1068 const QRect r = rect / qApp->devicePixelRatio(); 1069 1070 KX11Extras::self()->init(INFO_BASIC); 1071 NETEventFilter *const s_d = KX11Extras::self()->s_d_func(); 1072 QPoint p = r.center(); 1073 // make absolute 1074 p = QPoint(p.x() + s_d->desktopViewport(s_d->currentDesktop(true)).x, p.y() + s_d->desktopViewport(s_d->currentDesktop(true)).y); 1075 NETSize s = s_d->desktopGeometry(); 1076 QSize vs(displayWidth(), displayHeight()); 1077 int xs = s.width / vs.width(); 1078 int x = p.x() < 0 ? 0 : p.x() >= s.width ? xs - 1 : p.x() / vs.width(); 1079 int ys = s.height / vs.height(); 1080 int y = p.y() < 0 ? 0 : p.y() >= s.height ? 
ys - 1 : p.y() / vs.height(); 1081 return y * xs + x + 1; 1082} 1083 1085 qreal left_width, 1086 qreal left_start, 1087 qreal left_end, 1088 qreal right_width, 1089 qreal right_start, 1090 qreal right_end, 1091 qreal top_width, 1092 qreal top_start, 1093 qreal top_end, 1094 qreal bottom_width, 1095 qreal bottom_start, 1096 qreal bottom_end) 1097{ 1098 CHECK_X11_VOID 1099 const qreal dpr = qApp->devicePixelRatio(); 1100 1101 NETWinInfo info(QX11Info::connection(), win, QX11Info::appRootWindow(), NET::Properties(), NET::Properties2()); 1102 NETExtendedStrut strut; 1103 strut.left_width = std::lround(left_width * dpr); 1104 strut.right_width = std::lround(right_width * dpr); 1105 strut.top_width = std::lround(top_width * dpr); 1106 strut.bottom_width = std::lround(bottom_width * dpr); 1107 strut.left_start = std::lround(left_start * dpr); 1108 strut.left_end = std::lround(left_end * dpr); 1109 strut.right_start = std::lround(right_start * dpr); 1110 strut.right_end = std::lround(right_end * dpr); 1111 strut.top_start = std::lround(top_start * dpr); 1112 strut.top_end = std::lround(top_end * dpr); 1113 strut.bottom_start = std::lround(bottom_start * dpr); 1114 strut.bottom_end = std::lround(bottom_end * dpr); 1115 info.setExtendedStrut(strut); 1116 NETStrut oldstrut; 1117 oldstrut.left = std::lround(left_width * dpr); 1118 oldstrut.right = std::lround(right_width * dpr); 1119 oldstrut.top = std::lround(top_width * dpr); 1120 oldstrut.bottom = std::lround(bottom_width * dpr); 1121 info.setStrut(oldstrut); 1122} 1123 1124void KX11Extras::setStrut(WId win, qreal left, qreal right, qreal top, qreal bottom) 1125{ 1126 CHECK_X11_VOID 1127 const qreal dpr = qApp->devicePixelRatio(); 1128 1129 int w = displayWidth(); 1130 int h = displayHeight(); 1131 setExtendedStrut(win, 1132 std::lround(left * dpr), 1133 0, 1134 std::lround(left * dpr) != 0 ? w : 0, 1135 std::lround(right * dpr), 1136 0, 1137 std::lround(right * dpr) != 0 ? w : 0, 1138 std::lround(top * dpr), 1139 0, 1140 std::lround(top * dpr) != 0 ? h : 0, 1141 std::lround(bottom * dpr), 1142 0, 1143 std::lround(bottom * dpr) != 0 ? 
h : 0); 1144} 1145 1146// optimalization - create private only when needed and only for what is needed 1147void KX11Extras::connectNotify(const QMetaMethod &signal) 1148{ 1149 CHECK_X11_VOID 1150 FilterInfo what = INFO_BASIC; 1152 what = INFO_WINDOWS; 1153 } else if (signal == QMetaMethod::fromSignal(&KX11Extras::strutChanged)) { 1154 what = INFO_WINDOWS; 1155 } else if (signal == QMetaMethod::fromSignal(&KX11Extras::windowChanged)) { 1156 what = INFO_WINDOWS; 1157 } 1158 1159 init(what); 1160 NETEventFilter *const s_d = s_d_func(); 1161 if (!s_d->strutSignalConnected && signal == QMetaMethod::fromSignal(&KX11Extras::strutChanged)) { 1162 s_d->strutSignalConnected = true; 1163 } 1164 QObject::connectNotify(signal); 1165} 1166 1167void KX11Extras::setType(WId win, NET::WindowType windowType) 1168{ 1169 CHECK_X11_VOID 1170 NETWinInfo info(QX11Info::connection(), win, QX11Info::appRootWindow(), NET::Properties(), NET::Properties2()); 1171 info.setWindowType(windowType); 1172} 1173 1175{ 1176 CHECK_X11_VOID 1177 NETWinInfo info(QX11Info::connection(), win, QX11Info::appRootWindow(), NET::WMState, NET::Properties2()); 1178 info.setState(state, state); 1179} 1180 1182{ 1183 CHECK_X11_VOID 1184 NETWinInfo info(QX11Info::connection(), win, QX11Info::appRootWindow(), NET::WMState, NET::Properties2()); 1185 info.setState(NET::States(), state); 1186} 1187 1188int KX11Extras::viewportToDesktop(const QPoint &p) 1189{ 1190 CHECK_X11 1191 KX11Extras::self()->init(INFO_BASIC); 1192 NETEventFilter *const s_d = KX11Extras::self()->s_d_func(); 1193 NETSize s = s_d->desktopGeometry(); 1194 QSize vs(displayWidth(), displayHeight()); 1195 int xs = s.width / vs.width(); 1196 int x = p.x() < 0 ? 0 : p.x() >= s.width ? xs - 1 : p.x() / vs.width(); 1197 int ys = s.height / vs.height(); 1198 int y = p.y() < 0 ? 0 : p.y() >= s.height ? 
ys - 1 : p.y() / vs.height(); 1199 return y * xs + x + 1; 1200} 1201 1202QPoint KX11Extras::constrainViewportRelativePosition(const QPoint &pos) 1203{ 1204 CHECK_X11 1205 init(INFO_BASIC); 1206 NETEventFilter *const s_d = s_d_func(); 1207 NETSize s = s_d->desktopGeometry(); 1208 NETPoint c = s_d->desktopViewport(s_d->currentDesktop(true)); 1209 int x = (pos.x() + c.x) % s.width; 1210 int y = (pos.y() + c.y) % s.height; 1211 if (x < 0) { 1212 x += s.width; 1213 } 1214 if (y < 0) { 1215 y += s.height; 1216 } 1217 return QPoint(x - c.x, y - c.y); 1218} 1219 1220QPoint KX11Extras::desktopToViewport(int desktop, bool absolute) 1221{ 1222 CHECK_X11 1223 init(INFO_BASIC); 1224 NETEventFilter *const s_d = s_d_func(); 1225 NETSize s = s_d->desktopGeometry(); 1226 QSize vs(displayWidth(), displayHeight()); 1227 int xs = s.width / vs.width(); 1228 int ys = s.height / vs.height(); 1229 if (desktop <= 0 || desktop > xs * ys) { 1230 return QPoint(0, 0); 1231 } 1232 --desktop; 1233 QPoint ret(vs.width() * (desktop % xs), vs.height() * (desktop / xs)); 1234 if (!absolute) { 1235 ret = QPoint(ret.x() - s_d->desktopViewport(s_d->currentDesktop(true)).x, ret.y() - s_d->desktopViewport(s_d->currentDesktop(true)).y); 1236 if (ret.x() >= s.width) { 1237 ret.setX(ret.x() - s.width); 1238 } 1239 if (ret.x() < 0) { 1240 ret.setX(ret.x() + s.width); 1241 } 1242 if (ret.y() >= s.height) { 1243 ret.setY(ret.y() - s.height); 1244 } 1245 if (ret.y() < 0) { 1246 ret.setY(ret.y() + s.height); 1247 } 1248 } 1249 return ret; 1250} 1251 1252bool KX11Extras::showingDesktop() 1253{ 1254 KX11Extras::self()->init(INFO_BASIC); 1255 return KX11Extras::self()->s_d_func()->showingDesktop(); 1256} 1257 1258void KX11Extras::setShowingDesktop(bool showing) 1259{ 1260 NETRootInfo info(QX11Info::connection(), NET::Properties(), NET::WM2ShowingDesktop, QX11Info::appScreen()); 1261 info.setShowingDesktop(showing); 1262} 1263 1264#include "kx11extras.moc" 1265#include "moc_kx11extras.cpp" Convenience access to certain properties and features of window systems. void showingDesktopChanged(bool showing) The state of showing the desktop has changed. static KWindowSystem * self() Access to the singleton instance. A collection of functions to obtain information from and manipulate X11 windows. Definition kx11extras.h:29 static bool mapViewport() Returns true if viewports are mapped to virtual desktops. static void setDesktopName(int desktop, const QString &name) Sets the name of the specified desktop. static void setOnAllDesktops(WId win, bool b) Sets window win to be present on all virtual desktops if is true. static void setState(WId win, NET::States state) Sets the state of window win to state. static QString readNameProperty(WId window, unsigned long atom) Function that reads and returns the contents of the given text property (WM_NAME, WM_ICON_NAME,... static bool hasWId(WId id) Test to see if id still managed at present. static QPixmap icon(WId win, int width=-1, int height=-1, bool scale=false) Returns an icon for window win. bool compositingActive Whether desktop compositing is active. Definition kx11extras.h:35 static void forceActiveWindow(WId win, long time=0) Sets window win to be the active window. void windowChanged(WId id, NET::Properties properties, NET::Properties2 properties2) The window changed. static void setStrut(WId win, qreal left, qreal right, qreal top, qreal bottom) Convenience function for setExtendedStrut() that automatically makes struts as wide/high as the scree... 
void activeWindowChanged(WId id) Hint that <Window> is active (= has focus) now. void numberOfDesktopsChanged(int num) The number of desktops changed. static void minimizeWindow(WId win) Minimizes the window with id win. static QRect workArea(int desktop=-1) Returns the workarea for the specified desktop, or the current work area if no desktop has been speci... static int currentDesktop() Returns the current virtual desktop. static int numberOfDesktops() Returns the number of virtual desktops. static void setCurrentDesktop(int desktop) Convenience function to set the current desktop to desktop. static void unminimizeWindow(WId win) Unminimizes the window with id win. @ NETWM read from property from the window manager specification Definition kx11extras.h:218 @ WMHints read from WMHints property Definition kx11extras.h:219 @ ClassHint load icon after getting name from the classhint Definition kx11extras.h:220 @ XApp load the standard X icon (last fallback) Definition kx11extras.h:221 static void clearState(WId win, NET::States state) Clears the state of window win from state. static QList< WId > stackingOrder() Returns the list of all toplevel windows currently managed by the window manager in the current stack... static WId activeWindow() Returns the currently active window, or 0 if no window is active. static void setExtendedStrut(WId win, qreal left_width, qreal left_start, qreal left_end, qreal right_width, qreal right_start, qreal right_end, qreal top_width, qreal top_start, qreal top_end, qreal bottom_width, qreal bottom_start, qreal bottom_end) Sets the strut of window win to left_width ranging from left_start to left_end on the left edge,... void windowRemoved(WId id) A window has been removed. void strutChanged() Something changed with the struts, may or may not have changed the work area. static QList< WId > windows() Returns the list of all toplevel windows currently managed by the window manager in the order of crea... void desktopNamesChanged() Desktops have been renamed. static void activateWindow(WId win, long time=0) Requests that window win is activated. static void setOnActivities(WId win, const QStringList &activities) Moves window win to activities activities. void stackingOrderChanged() Emitted when the stacking order of the window changed. void windowAdded(WId id) A window has been added. void compositingChanged(bool enabled) Compositing was enabled or disabled. static QString desktopName(int desktop) Returns the name of the specified desktop. void workAreaChanged() The workarea has changed. void currentDesktopChanged(int desktop) Switched to another virtual desktop. static void setOnDesktop(WId win, int desktop) Moves window win to desktop desktop. static void setType(WId win, NET::WindowType windowType) Sets the type of window win to windowType. Common API for root window properties/protocols. Definition netwm.h:41 void setDesktopViewport(int desktop, const NETPoint &viewport) Sets the viewport for the current desktop to the specified point. Definition netwm.cpp:746 int currentDesktop(bool ignore_viewport=false) const Returns the current desktop. Definition netwm.cpp:2491 xcb_window_t activeWindow() const Returns the active (focused) window. Definition netwm.cpp:2499 const xcb_window_t * clientListStacking() const Returns an array of Window id's, which contain all managed windows in stacking order. Definition netwm.cpp:2414 void setDesktopName(int desktop, const char *desktopName) Sets the name of the specified desktop. 
Definition netwm.cpp:682 void activate() Window Managers must call this after creating the NETRootInfo object, and before using any other meth... Definition netwm.cpp:583 int clientListStackingCount() const Returns the number of managed windows in the clientListStacking array. Definition netwm.cpp:2419 NETPoint desktopViewport(int desktop) const Returns the viewport of the specified desktop. Definition netwm.cpp:2429 NETSize desktopGeometry() const Returns the desktop geometry size. Definition netwm.cpp:2424 const char * desktopName(int desktop) const Returns the name for the specified desktop. Definition netwm.cpp:2449 void setCurrentDesktop(int desktop, bool ignore_viewport=false) Sets the current desktop to the specified desktop. Definition netwm.cpp:660 void event(xcb_generic_event_t *event, NET::Properties *properties, NET::Properties2 *properties2=nullptr) This function takes the passed xcb_generic_event_t and returns the updated properties in the passed i... Definition netwm.cpp:1667 void moveResizeWindowRequest(xcb_window_t window, int flags, int x, int y, int width, int height) Clients (such as pagers/taskbars) that wish to move/resize a window using WM2MoveResizeWindow (_NET_M... Definition netwm.cpp:1591 NETRect workArea(int desktop) const Returns the workArea for the specified desktop. Definition netwm.cpp:2439 bool isSupported(NET::Property property) const Returns true if the given property is supported by the window manager. Definition netwm.cpp:2379 int numberOfDesktops(bool ignore_viewport=false) const Returns the number of desktops. Definition netwm.cpp:2483 bool showingDesktop() const Returns the status of _NET_SHOWING_DESKTOP. Definition netwm.cpp:1558 Common API for application window properties/protocols. Definition netwm.h:967 xcb_connection_t * xcbConnection() const Returns the xcb connection used. Definition netwm.cpp:4881 int desktop(bool ignore_viewport=false) const Returns the desktop where the window is residing. Definition netwm.cpp:4700 void setStrut(NETStrut strut) Definition netwm.cpp:2726 NETIcon icon(int width=-1, int height=-1) const Returns an icon. Definition netwm.cpp:3501 xcb_pixmap_t icccmIconPixmapMask() const Returns the mask for the icon pixmap as set in WM_HINTS. Definition netwm.cpp:4777 void setExtendedStrut(const NETExtendedStrut &extended_strut) Set the extended (partial) strut for the application window. Definition netwm.cpp:2701 NETStrut strut() const Definition netwm.cpp:4613 xcb_pixmap_t icccmIconPixmap() const Returns the icon pixmap as set in WM_HINTS. Definition netwm.cpp:4772 void setDesktop(int desktop, bool ignore_viewport=false) Set which window the desktop is (should be) on. Definition netwm.cpp:3211 static const int OnAllDesktops Sentinel value to indicate that the client wishes to be visible on all desktops. Definition netwm.h:1659 const char * windowClassClass() const Returns the class component of the window class for the window (i.e. Definition netwm.cpp:4782 @ Sticky indicates that the Window Manager SHOULD keep the window's position fixed on the screen,... Definition netwm_def.h:515 WindowType Window type. 
Definition netwm_def.h:357 @ FromApplication indicates that the request comes from a normal application Definition netwm_def.h:849 @ FromTool indicated that the request comes from pager or similar tool Definition netwm_def.h:853 const char * constData() const const void installNativeEventFilter(QAbstractNativeEventFilter *filterObj) QCoreApplication * instance() QWindow * focusWindow() void screenAdded(QScreen *screen) void screenRemoved(QScreen *screen) QList< QScreen * > screens() QIcon fromTheme(const QString &name) bool isNull() const const QImage scaled(const QSize &size, Qt::AspectRatioMode aspectRatioMode, Qt::TransformationMode transformMode) const const void append(QList< T > &&value) const_reference at(qsizetype i) const const iterator begin() void clear() const_iterator constBegin() const const const_iterator constEnd() const const bool contains(const AT &value) const const qsizetype count() const const iterator end() iterator erase(const_iterator begin, const_iterator end) qsizetype removeAll(const AT &t) QMetaMethod fromSignal(PointerToMemberFunction signal) bool invokeMethod(QObject *context, Functor &&function, FunctorReturnType *ret) Q_EMITQ_EMIT Q_INVOKABLEQ_INVOKABLE Q_OBJECTQ_OBJECT QMetaObject::Connection connect(const QObject *sender, PointerToMemberFunction signal, Functor functor) virtual void connectNotify(const QMetaMethod &signal) bool disconnect(const QMetaObject::Connection &connection) QThread * thread() const const QString tr(const char *sourceText, const char *disambiguation, int n) QPixmap fromImage(QImage &&image, Qt::ImageConversionFlags flags) int height() const const bool isNull() const const QImage toImage() const const int width() const const int x() const const int y() const const int bottom() const const int height() const const QRect intersected(const QRect &rectangle) const const int left() const const int right() const const void setBottom(int y) void setLeft(int x) void setRight(int x) void setTop(int y) QSize size() const const int top() const const QPoint topLeft() const const int width() const const QRect boundingRect() const const void geometryChanged(const QRect &geometry) bool isNull() const const QString arg(Args &&... args) const const QString fromLocal8Bit(QByteArrayView str) QString fromUtf8(QByteArrayView str) QString toLower() const const QByteArray toUtf8() const const QString join(QChar separator) const const IgnoreAspectRatio BlockingQueuedConnection SmoothTransformation QFuture< void > filter(QThreadPool *pool, Sequence &sequence, KeepFunctor &&filterFunction) WId winId() const const Partial strut class for NET classes. Definition netwm_def.h:180 int bottom_width Bottom border of the strut, width and range. Definition netwm_def.h:218 int left_width Left border of the strut, width and range. Definition netwm_def.h:203 int right_width Right border of the strut, width and range. Definition netwm_def.h:208 int top_width Top border of the strut, width and range. Definition netwm_def.h:213 Simple icon class for NET classes. Definition netwm_def.h:147 NETSize size Size of the icon. Definition netwm_def.h:161 unsigned char * data Image data for the icon. Definition netwm_def.h:168 Simple point class for NET classes. Definition netwm_def.h:27 int x x coordinate. Definition netwm_def.h:51 int y y coordinate Definition netwm_def.h:52 Simple rectangle class for NET classes. Definition netwm_def.h:105 NETPoint pos Position of the rectangle. Definition netwm_def.h:126 NETSize size Size of the rectangle. 
Definition netwm_def.h:133 Simple size class for NET classes. Definition netwm_def.h:68 int height Height. Definition netwm_def.h:92 int width Width. Definition netwm_def.h:91 int bottom Bottom border of the strut. Definition netwm_def.h:262 int left Left border of the strut. Definition netwm_def.h:247 int right Right border of the strut. Definition netwm_def.h:252 int top Top border of the strut. Definition netwm_def.h:257
How were the moved/renamed files accounted for ?
Posted Mar 2, 2010 12:47 UTC (Tue) by nye (guest, #51576)
In reply to: How were the moved/renamed files accounted for ? by jnareb
Parent article: How old is our kernel?

So -M (detect renames) seems from a user's point of view to act differently depending on whether you're trying to log a file or the whole project. This seems like a good, specific example of one of those usability issues people are always handwaving about.
If $n$ balls are thrown into $k$ bins, what is the probability that every bin gets at least one ball?

If $n$ balls are thrown into $k$ bins (uniformly at random and independently), what is the probability that every bin gets at least one ball? That is, if we write $X$ for the number of empty bins, what is $P(X=0)$? I was able to calculate $E(X)$ and thus bound $P(X \ge 1) \le E(X)$ with Markov's inequality, but I don't know how to work out an exact answer.
http://www.inference.phy.cam.ac.uk/mackay/itprnn/ps/588.596.pdf

Answers:

What is the chance that all $k$ bins are occupied? For $1\leq i\leq k$, define $A_i$ to be the event that the $i$th bin stays empty. These are exchangeable events with $P(A_1\cdots A_j)=(1-{j\over k})^n$, and so by inclusion-exclusion the probability that there are no empty bins is $$P(X=0)=\sum_{j=0}^k (-1)^j {k\choose j}\left(1-{j\over k}\right)^n.$$ Stirling numbers of the second kind can be used to give an alternative solution to the occupancy problem. We can fill all $k$ bins as follows: partition the balls $\{1,2,\dots, n\}$ into $k$ non-empty sets, then assign the bin values $1,2,\dots, k$ to these sets. There are ${n\brace k}$ partitions, and for each partition $k!$ ways to assign the bin values. Thus, $$P(X=0)={{n\brace k}\,k!\over k^n}.$$

I propose to use combinatorics, namely the stars and bars formulae. 1. The number of outcomes with at least one ball in every bin is $\tbinom{n - 1}{k-1}$. 2. The number of outcomes with any number of balls in every bin is $\tbinom{n + k - 1}{n}$. Now just divide.

To count the outcomes for this question, using the inclusion-exclusion formula is correct, but "$n$ choose $k$, then multiplied by $k!$" is only correct if the question asks that each bin hold exactly one ball. If $n$ balls are thrown into $n$ bins, a simple answer is then $n!$. So the question asking for at least one ball is more complicated: we have to write down each possible configuration and sum over the combinations, just like the related question asking that each day of the week receive at least one call. If we have exact numbers, we can count the outcomes directly, but here it is cleaner to pose the at-least-one-ball question in the setting of $n$ balls thrown into $n$ bins.

Set $X_i = 1$ if there is at least 1 ball in the $i$th bin, and $0$ otherwise ($i$ goes from 1 to $k$). Then this question is asking for $P[\sum_{i=1}^k X_i = k]$. Treating the $X_i$ as independent of each other, $P[\sum_{i=1}^k X_i=k] = (P[X_i=1])^k = (1-P[X_i=0])^k = (1-(\tfrac{k-1}{k})^n)^k$.
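As a quick check of the two closed forms above, take $n=4$ balls and $k=2$ bins (the numbers are only an illustration). Direct enumeration works here: of the $2^4 = 16$ equally likely assignments, only the two "all balls in one bin" assignments leave a bin empty, so $P(X=0) = 14/16 = 7/8$. Inclusion-exclusion gives the same value, $$P(X=0)=\sum_{j=0}^{2} (-1)^j {2\choose j}\left(1-{j\over 2}\right)^4 = 1 - 2\cdot\left({1\over 2}\right)^4 + 0 = {14\over 16} = {7\over 8},$$ and so does the Stirling-number form, since ${4\brace 2} = 7$: $$P(X=0)={{4\brace 2}\,2!\over 2^4} = {14\over 16} = {7\over 8}.$$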
streamWrapper::url_stat

(PHP 4 >= 4.3.2, PHP 5)

streamWrapper::url_stat - Retrieve information about a file

Parameters

path
    The file path or URL to stat. Note that in the case of a URL, it must be a :// delimited URL. Other URL forms are not supported.

flags
    Holds additional flags set by the streams API. It can hold one or more of the following values OR'd together.

    STREAM_URL_STAT_LINK: For resources with the ability to link to other resources (such as an HTTP Location: forward, or a filesystem symlink). This flag specifies that only information about the link itself should be returned, not the resource pointed to by the link. This flag is set in response to calls to lstat(), is_link(), or filetype().

    STREAM_URL_STAT_QUIET: If this flag is set, your wrapper should not raise any errors. If this flag is not set, you are responsible for reporting errors using the trigger_error() function during stating of the path.

Return Values

Should return as many elements as stat() does. Unknown or unavailable values should be set to a rational value (usually 0).

Errors/Exceptions

Emits E_WARNING if the call to this method fails (i.e. it is not implemented).

Note: The streamWrapper::$context property is updated if a valid context is passed to the caller function.
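As an illustration of the contract above, here is a minimal sketch of a url_stat() implementation for a wrapper that maps a hypothetical myfiles:// scheme onto a local directory. The scheme name, the /tmp/myfiles root, and the class are assumptions made up for this example; only the flag handling follows the description above.

    <?php
    class MyFilesWrapper
    {
        /** @var resource|null Stream context; PHP updates this when a valid context is passed. */
        public $context;

        // Map "myfiles://some/path" onto a real path under an assumed local root.
        private function toLocalPath(string $path): string
        {
            return '/tmp/myfiles/' . substr($path, strlen('myfiles://'));
        }

        public function url_stat(string $path, int $flags)
        {
            $local = $this->toLocalPath($path);

            // STREAM_URL_STAT_LINK: report on the link itself, so use lstat() instead of stat().
            $statFn = ($flags & STREAM_URL_STAT_LINK) ? 'lstat' : 'stat';

            if ($flags & STREAM_URL_STAT_QUIET) {
                // Quiet mode: suppress errors and simply return false on failure.
                return @$statFn($local);
            }

            if (!file_exists($local)) {
                trigger_error("myfiles: cannot stat '$path'", E_USER_WARNING);
                return false;
            }
            return $statFn($local);
        }
    }

    stream_wrapper_register('myfiles', MyFilesWrapper::class);
    var_dump(is_file('myfiles://example.txt')); // is_file() goes through url_stat()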
/* { dg-do run { target openacc_nvidia_accel_selected } } */
/* { dg-options "-foffload=-fdump-rtl-mach" } */
/* { dg-skip-if "" { *-*-* } { "*" } { "-O2" } } */

#define N (32*32*32+17)

void __attribute__ ((noinline)) Foo (int *ary)
{
  int ix;

#pragma acc parallel num_workers(32) vector_length(32) copyout(ary[0:N])
  {
    /* Loop partitioning should be merged.  */
#pragma acc loop worker vector
    for (unsigned ix = 0; ix < N; ix++)
      {
        ary[ix] = ix;
      }
  }
}

int main ()
{
  int ary[N];

  Foo (ary);

  return 0;
}

/* { dg-final { scan-offload-rtl-dump "Merging loop .* into " "mach" } } */
I'm using the Android Amplify library. I am having trouble finding out what kind of error would be passed back from the Amplify.Auth.signIn() function. I'm not finding the documentation for this anywhere. Right now I am just kind of guessing as to what it will return. What I want is to tell the user how to recover from the error. Does the username not exist, was the password incorrect, was it of bad format, etc. Reading the source code I am given the impression that AmplifyException.recoveryMessage is what I want, but that would still be problematic as it doesn't allow me to customize the message.

    /**
     * Sign in the user to the back-end service and set the currentUser for this application
     * @param username User's username
     * @param password User's password
     */
    override fun initiateSignin(username: String, password: String) {
        // Sign in the user to the AWS back-end
        Amplify.Auth.signIn(
            username,
            password,
            { result ->
                if (result.isSignInComplete) {
                    Timber.tag(TAG).i("Sign in successful.")
                    // Load the user if the sign in was successful
                    loadUser()
                } else {
                    Timber.tag(TAG).i("Sign in unsuccessful.")
                    // TODO: I think this will happen if the password is incorrect?
                }
            },
            { error ->
                Timber.tag(UserLogin.TAG).e(error.toString())
                authenticationRecoveryMessage.value = error.recoverySuggestion
            }
        )
    }

The authentication recovery message is LiveData that I want to use to update a snackbar which will tell the user what they need to do for a successful login. I feel there must be some way to get the error from this that I just haven't figured out yet. The ideal way to handle messages to the user is with XML strings for translation possibilities, so I would really like to use my own strings in the snackbar, but I need to know the things that can go wrong with sign-up and what is being communicated to me through the error -> {} callback.

Comments:
• Any luck? I am looking for same. The docs are really disappointing. – burntsugar, Jan 5, 2021 at 10:18
• @burntsugar No sadly. I would suggest reading the class of error in the callback then working backwards to see the general idea of exceptions that could be thrown. Finally, just test the system and see what is thrown under certain circumstances. This is the best I could do for the time being. They are commented in the source; it's just that there is not a lot of information on what will cause which to be thrown. – Jan 5, 2021 at 15:47

4 Answers

I couldn't find them in the documentation myself, so I decided to log the possible cases.

    try {
        const signInResult = await Auth.signIn({ username: emailOrPhoneNumber, password });
        const userId = signInResult.attributes.sub;
        const token = (await Auth.currentSession()).getAccessToken().getJwtToken();
        console.log(userId, 'token: ', token);
        resolve(new AuthSession(userId, token, false));
    } catch (e) {
        switch (e.message) {
            case 'Username should be either an email or a phone number.':
                reject(`${AuthError.usernameInvalid}: ${e.message}`);
                break;
            case 'Password did not conform with policy: Password not long enough':
                reject(`${AuthError.passwordTooShort}: ${e.message}`);
                break;
            case 'User is not confirmed.':
                reject(`${AuthError.userIsNotConfirmed}: ${e.message}`);
                break;
            case 'Incorrect username or password.':
                reject(`${AuthError.incorrectUsernameOrPassword}: ${e.message}`);
                break;
            case 'User does not exist.':
                reject(`${AuthError.userDoesNotExist}: ${e.message}`);
                break;
            default:
                reject(`${AuthError.unknownError}: ${e.message}`);
        }
    }

Comments:
• Thanks, I think this is the best we can do for now. This is similar to what I ended up doing in Android. – Jan 11, 2021 at 14:07
• @chrisdottel no worries! Guess Cognito should provide an enum containing all the auth errors. – Jan 11, 2021 at 14:23
• Do you have a similar list for the signup function? I could only find UsernameExistsException & InvalidPasswordException. – Feb 10, 2022 at 8:34
• @KaustuvPrajapati, I can check. Do you need one for sign up only? – Feb 11, 2022 at 8:58
• It would be great if there were a list of which exceptions could occur from the server for the different Amplify functions such as Amplify.Auth.signup(), .signIn() or .logout(). It's not clear in the docs which exceptions can occur for each of these functions. – Feb 11, 2022 at 12:13

SignIn uses Cognito's InitiateAuth under the hood, so error codes can be found here: https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_InitiateAuth.html#API_InitiateAuth_Errors They are available in the code field of the error.

Comment:
• Both code and name work for this. – Nov 29, 2021 at 11:37

You can use this switch case for Auth.signIn():

    catch (error) {
        let errorMessage;
        switch (error.name) {
            case 'UserNotFoundException':
                errorMessage = 'User not found. Check email/username.';
                break;
            case 'NotAuthorizedException':
                errorMessage = 'Incorrect password. Try again.';
                break;
            case 'PasswordResetRequiredException':
                errorMessage = 'Password reset required. Check email.';
                break;
            case 'UserNotConfirmedException':
                errorMessage = 'User not confirmed. Verify email.';
                break;
            case 'CodeMismatchException':
                errorMessage = 'Invalid confirmation code. Retry.';
                break;
            case 'ExpiredCodeException':
                errorMessage = 'Confirmation code expired. Resend code.';
                break;
            case 'InvalidParameterException':
                errorMessage = 'Invalid input. Check credentials.';
                break;
            case 'InvalidPasswordException':
                errorMessage = 'Invalid password. Follow policy.';
                break;
            case 'TooManyFailedAttemptsException':
                errorMessage = 'Too many failed attempts. Wait.';
                break;
            case 'TooManyRequestsException':
                errorMessage = 'Request limit reached. Wait and retry.';
                break;
            case 'LimitExceededException':
                errorMessage = 'User pool full. Retry later.';
                break;
            default:
                errorMessage = 'Unknown error. Contact support.';
        }
        return rejectWithValue(error.message);
    }

One more approach:

    import { signIn } from 'aws-amplify/auth';

    try {
        const output = await signIn({ username, password });
        return output;
    } catch (err: any) {
        if (err.name === 'NotAuthorizedException') {
            console.error('User is not authorized. Check the username and password.');
        }
        console.log('error signing in: ', err.name, err.message);
    }
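A note for the Android/Kotlin side that the question itself uses: the pattern suggested in the comments (inspect the class of the error and work backwards) can be written roughly as below. This is only a sketch against Amplify Android v1, where the AuthException handed to the error callback usually wraps the underlying Cognito exception as its cause; the exception names in the when branches and the R.string resources are illustrative assumptions, not an official list.

    import com.amplifyframework.auth.AuthException

    // Sketch: map the AuthException delivered to the onError callback to one of
    // our own string resources, so the snackbar text stays translatable.
    private fun toUserMessage(error: AuthException): Int {
        // The wrapped Cognito exception (e.g. UserNotFoundException, NotAuthorizedException)
        // often sits in error.cause; fall back to a generic message if it is absent.
        return when (error.cause?.javaClass?.simpleName) {
            "UserNotFoundException" -> R.string.error_user_not_found
            "NotAuthorizedException" -> R.string.error_wrong_password
            "UserNotConfirmedException" -> R.string.error_user_not_confirmed
            "InvalidParameterException" -> R.string.error_bad_input
            else -> R.string.error_sign_in_generic
        }
    }

    // Usage inside the error callback from the question:
    // { error -> authenticationRecoveryMessage.postValue(context.getString(toUserMessage(error))) }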
Hi. need help with constraints.. stuck :(

sqlite> .open test3.db
sqlite> PRAGMA foreign_keys=ON;
sqlite>
sqlite> CREATE TABLE ARTIST (Artistid INTEGER PRIMARY KEY, Artist_Name TEXT NOT NULL);
sqlite>
sqlite> CREATE TABLE TRACK (Trackid INTEGER PRIMARY KEY, Artistid INTEGER, Track_Name TEXT, Artist_Name TEXT NOT NULL, Release_Date TEXT,
   ...> CONSTRAINT Trackid_Artistid_PK PRIMARY KEY (Trackid, Artistid),
   ...> CONSTRAINT Trackid_FK FOREIGN KEY (Trackid) REFERENCES TRACK (Trackid),
   ...> CONSTRAINT Artistid_FK FOREIGN KEY (Artistid) REFERENCES ARTIST (Artistid));
Error: table "TRACK" has more than one primary key
sqlite>
sqlite> CREATE TABLE RADIO_SHOWS (RadioShowID INTEGER PRIMARY KEY, Artistid INTEGER, Artist_Name TEXT, Radio_Show_Name TEXT, DATE TEXT,
   ...> CONSTRAINT Radioshowid_Artistid_PK PRIMARY KEY (RadioShowID, Artistid),
   ...> CONSTRAINT RadioShowID_FK FOREIGN KEY (RadioShowID) REFERENCES RADIO_SHOWS (RadioShowID),
   ...> CONSTRAINT Artistid_FK FOREIGN KEY (Artistid) REFERENCES ARTIST (Artistid));
Error: table "RADIO_SHOWS" has more than one primary key
sqlite>
sqlite> CREATE TABLE GENRES (Styleid INTEGER PRIMARY KEY, Style_Name TEXT);
sqlite>
sqlite> CREATE TABLE USER (Userid INTEGER PRIMARY KEY, User_Name TEXt, country TEXT);
sqlite>
sqlite> CREATE TABLE PERSONAL_INFO (Name TEXT, Screen_Name TEXT, age INTEGER, gender TEXT);
sqlite>
sqlite> CREATE TABLE FAVS (Favoriteid INTEGER PRIMARY KEY, Favorite_Name TEXT, Favorite_Genre TEXT, YAY TEXT, NAY TEXT);
sqlite>
sqlite> CREATE TABLE LISTEN_server (Styleid INTEGER, Style_Name TEXT, Artistid INTEGER, Artist_Name TEXT, Trackid INTEGER, Track_Name TEXT,
   ...> CONSTRAINT Styleid_Artistid_Trackid_PK PRIMARY KEY (Styleid, artistid, trackid),
   ...> CONSTRAINT Styleid_FK FOREIGN KEY (Styleid) REFERENCES GENRES (Styleid,
   ...> CONSTRAINT Artistid_FK FOREIGN KEY (artistid) REFERENCES ARTIST (Artistid),
   ...> CONSTRAINT Trackid_FK FOREIGN KEY (trackid) REFERENCES TRACK (Trackid));
Error: near "CONSTRAINT": syntax error
sqlite>
sqlite> CREATE TABLE SETTINGS (settingsid INTEGER PRIMARY KEY, settings_name TEXT, USAGE INTEGER);

bump

Hello, Please notice that this is a Microsoft SQL Server site so you may not find many people who know SQLite.
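A note on the two error messages in the transcript: SQLite reports "has more than one primary key" when a table declares both a column-level INTEGER PRIMARY KEY and a table-level PRIMARY KEY (...) constraint, and the LISTEN_server statement is additionally missing the closing parenthesis after REFERENCES GENRES (Styleid. One possible rewrite of TRACK and LISTEN_server is sketched below; the column names come from the post (trimmed to the essentials), and whether you keep the single-column key or the composite key is a design choice rather than the only correct answer.

    PRAGMA foreign_keys = ON;

    -- Keep only one primary key per table: here the single-column key,
    -- with Artistid as a plain foreign key (the composite PRIMARY KEY constraint is dropped).
    CREATE TABLE TRACK (
        Trackid      INTEGER PRIMARY KEY,
        Artistid     INTEGER NOT NULL,
        Track_Name   TEXT,
        Release_Date TEXT,
        CONSTRAINT Artistid_FK FOREIGN KEY (Artistid) REFERENCES ARTIST (Artistid)
    );

    -- Junction table: a composite primary key is fine as long as the columns
    -- are not also declared INTEGER PRIMARY KEY individually.
    CREATE TABLE LISTEN_server (
        Styleid  INTEGER,
        Artistid INTEGER,
        Trackid  INTEGER,
        CONSTRAINT Styleid_Artistid_Trackid_PK PRIMARY KEY (Styleid, Artistid, Trackid),
        CONSTRAINT Styleid_FK  FOREIGN KEY (Styleid)  REFERENCES GENRES (Styleid),
        CONSTRAINT Artistid_FK FOREIGN KEY (Artistid) REFERENCES ARTIST (Artistid),
        CONSTRAINT Trackid_FK  FOREIGN KEY (Trackid)  REFERENCES TRACK (Trackid)
    );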
Java Solution - 33. Search in Rotated Sorted Array

class Solution {
    public int search(int[] nums, int target) {
        if (nums.length == 0) return -1;
        int l = 0, r = nums.length - 1;
        while (l <= r) {
            int m = (l + r) / 2;
            if (nums[m] == target) {
                return m;
            } else if (nums[m] < nums[r]) {
                if (nums[m] < target && nums[r] >= target) l = m + 1;
                else r = m - 1;
            } else {
                if (nums[l] <= target && nums[m] > target) r = m - 1;
                else l = m + 1;
            }
        }
        return -1;
    }
}
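The idea behind the code, for anyone skimming: at each step at least one half of the current window is sorted (the right half when nums[m] < nums[r], otherwise the left half), so the target either falls inside that sorted half or must be in the other one, and one half can be discarded per iteration. A small, made-up driver to try it out:

    public class Main {
        public static void main(String[] args) {
            Solution s = new Solution();
            int[] nums = {4, 5, 6, 7, 0, 1, 2}; // a sorted array rotated at the pivot
            System.out.println(s.search(nums, 0)); // expected index: 4
            System.out.println(s.search(nums, 3)); // not present, expected: -1
        }
    }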
Douyin weighted-account transfer price _ Douyin weighted-account transfer website

2024-07-11

I can help you write this article on "Douyin weighted-account transfer prices". Below is the outline for the piece; I will write the full article following this outline.

---

**Title: Unlocking the price code behind Douyin weighted-account transfers**

**Introduction:**
In the self-media era, Douyin accounts have gradually become a red-hot digital resource, and the transfer prices of weighted accounts attract particular attention. You have probably wondered what these accounts are actually worth. This article reveals the secrets of Douyin weighted-account transfer pricing from several angles.

**Outline:**

**1. Factors that influence the price of a Douyin weighted account:**
1.1 The importance of follower count
1.2 Assessing the quality of the published content
1.3 The impact of account activity

**2. Potential risks of buying a weighted account:**
2.1 Possible fraud risks
2.2 Compliance with laws and regulations
2.3 Ways to avoid having the account frozen

**3. Suggestions for choosing a transfer platform:**
3.1 The importance of reputation and word of mouth
3.2 The platform's service guarantees
3.3 User experience and customer reviews

**4. How to assess the real value of a Douyin weighted account:**
4.1 Industry analysis and trend forecasting
4.2 Evaluating influencer reach
4.3 Flexibility of the deal and follow-up service

---

By analyzing the points above in depth, this article will help you understand more fully the mechanics behind Douyin weighted-account transfer prices, and will also guide you toward wiser, more rational self-media account deals on the forums. Let's lift the mysterious veil on this trade in digital resources together!
__label__pos
0.995029
高效Java编程-13. 谨慎地重写 clone 方法   Cloneable 接口的目的是作为一个 mixin 接口 (详见第 20 条),公布这样的类允许克隆。不幸的是,它没有达到这个目的。它的主要缺点是缺少 clone 方法,而 Object 的 clone 方法是受保护的。你不能,不借助反射 (详见第 65 条),仅仅因为它实现了 Cloneable 接口,就调用对象上的 clone 方法。即使是反射调用也可能失败,因为不能保证对象具有可访问的 clone 方法。尽管存在许多缺陷,该机制在合理的范围内使用,所以理解它是值得的。这个条目告诉你如何实现一个行为良好的 clone 方法,在适当的时候讨论这个方法,并提出替代方案。   既然 Cloneable 接口不包含任何方法,那它用来做什么? 它决定了 Object 的受保护的 clone 方法实现的行为:如果一个类实现了 Cloneable 接口,那么 Object 的 clone 方法将返回该对象的逐个属性(field-by-field)拷贝;否则会抛出 CloneNotSupportedException 异常。这是一个非常反常的接口使用,而不应该被效仿。 通常情况下,实现一个接口用来表示可以为客户做什么。但对于 Cloneable 接口,它会修改父类上受保护方法的行为。   虽然规范并没有说明,但在实践中,实现 Cloneable 接口的类希望提供一个正常运行的公共 clone 方法。为了实现这一目标,该类及其所有父类必须遵循一个复杂的、不可执行的、稀疏的文档协议。由此产生的机制是脆弱的、危险的和不受语言影响的(extralinguistic):它创建对象而不需要调用构造方法。   clone 方法的通用规范很薄弱的。 以下内容是从 Object 规范中复制出来的:   创建并返回此对象的副本。 「复制(copy)」的确切含义可能取决于对象的类。 一般意图是,对于任何对象 x,表达式 x.clone() != x 返回 true,并且 x.clone().getClass() == x.getClass() 也返回 true,但它们不是绝对的要求,但通常情况下,x.clone().equals(x) 返回 true,当然这个要求也不是绝对的。   根据约定,这个方法返回的对象应该通过调用 super.clone 方法获得的。 如果一个类和它的所有父类(Object 除外)都遵守这个约定,情况就是如此,x.clone().getClass() == x.getClass()   根据约定,返回的对象应该独立于被克隆的对象。 为了实现这种独立性,在返回对象之前,可能需要修改由 super.clone 返回的对象的一个或多个属性。   这种机制与构造方法链(chaining)很相似,只是它没有被强制执行;如果一个类的 clone 方法返回一个通过调用构造方法获得而不是通过调用 super.clone 的实例,那么编译器不会抱怨,但是如果一个类的子类调用了 super.clone,那么返回的对象包含错误的类,从而阻止子类 clone 方法正常执行。如果一个类重写的 clone 方法是有 final 修饰的,那么这个约定可以被安全地忽略,因为子类不需要担心。但是,如果一个 final 类有一个不调用 super.clone 的 clone 方法,那么这个类没有理由实现 Cloneable 接口,因为它不依赖于 Object 的 clone 实现的行为。   假设你希望在一个类中实现 Cloneable 接口,它的父类提供了一个行为良好的 clone 方法。首先调用 super.clone。 得到的对象将是原始的完全功能的复制品。 在你的类中声明的任何属性将具有与原始属性相同的值。 如果每个属性包含原始值或对不可变对象的引用,则返回的对象可能正是你所需要的,在这种情况下,不需要进一步的处理。 例如,对于条目 11 中的 PhoneNumber 类,情况就是这样,但是请注意,不可变类永远不应该提供 clone 方法,因为这只会浪费复制。 有了这个警告,以下是 PhoneNumber 类的 clone 方法: // Clone method for class with no references to mutable state @Override public PhoneNumber clone() { try { return (PhoneNumber) super.clone(); } catch (CloneNotSupportedException e) { throw new AssertionError(); // Can't happen } }   为了使这个方法起作用,PhoneNumber 的类声明必须被修改,以表明它实现了 Cloneable 接口。 虽然 Object 类的 clone 方法返回 Object 类,但是这个 clone 方法返回 PhoneNumber 类。 这样做是合法和可取的,因为 Java 支持协变返回类型。 换句话说,重写方法的返回类型可以是重写方法的返回类型的子类。 这消除了在客户端转换的需要。 在返回之前,我们必须将 Object 的 super.clone 的结果强制转换为 PhoneNumber,但保证强制转换成功。   super.clone 的调用包含在一个 try-catch 块中。 这是因为 Object 声明了它的 clone 方法来抛出 CloneNotSupportedException 异常,这是一个检查时异常。 由于 PhoneNumber 实现了 Cloneable 接口,所以我们知道调用 super.clone 会成功。 这里引用的需要表明 CloneNotSupportedException 应该是未被检查的(详见第 71条)。   如果对象包含引用可变对象的属性,则前面显示的简单 clone 实现可能是灾难性的。 例如,考虑条目 7 中的 Stack 类: public class Stack { private Object[] elements; private int size = 0; private static final int DEFAULT_INITIAL_CAPACITY = 16; public Stack() { this.elements = new Object[DEFAULT_INITIAL_CAPACITY]; } public void push(Object e) { ensureCapacity(); elements[size++] = e; } public Object pop() { if (size == 0) throw new EmptyStackException(); Object result = elements[--size]; elements[size] = null; // Eliminate obsolete reference return result; } // Ensure space for at least one more element. 
private void ensureCapacity() { if (elements.length == size) elements = Arrays.copyOf(elements, 2 * size + 1); } }   假设你想让这个类可以克隆。 如果 clone 方法仅返回 super.clone() 调用的对象,那么生成的 Stack 实例在其 size 属性中具有正确的值,但 elements 属性引用与原始 Stack 实例相同的数组。 修改原始实例将破坏克隆中的不变量,反之亦然。 你会很快发现你的程序产生了无意义的结果,或者抛出 NullPointerException 异常。   这种情况永远不会发生,因为调用 Stack 类中的唯一构造方法。 实际上,clone 方法作为另一种构造方法; 必须确保它不会损坏原始对象,并且可以在克隆上正确建立不变量。 为了使 Stack 上的 clone 方法正常工作,它必须复制 stack 对象的内部。 最简单的方法是对元素数组递归调用 clone 方法: // Clone method for class with references to mutable state @Override public Stack clone() { try { Stack result = (Stack) super.clone(); result.elements = elements.clone(); return result; } catch (CloneNotSupportedException e) { throw new AssertionError(); } }   请注意,我们不必将 elements.clone 的结果转换为 Object[] 数组。 在数组上调用 clone 会返回一个数组,其运行时和编译时类型与被克隆的数组相同。 这是复制数组的首选习语。 事实上,数组是 clone 机制的唯一有力的用途。   还要注意,如果 elements 属性是 final 的,则以前的解决方案将不起作用,因为克隆将被禁止向该属性分配新的值。 这是一个基本的问题:像序列化一样,Cloneable 体系结构与引用可变对象的 final 属性的正常使用不兼容,除非可变对象可以在对象和其克隆之间安全地共享。 为了使一个类可以克隆,可能需要从一些属性中移除 final 修饰符。   仅仅递归地调用 clone 方法并不总是足够的。 例如,假设您正在为哈希表编写一个 clone 方法,其内部包含一个哈希桶数组,每个哈希桶都指向「键-值」对链表的第一项。 为了提高性能,该类实现了自己的轻量级单链表,而没有使用 java 内部提供的 java.util.LinkedList public class HashTable implements Cloneable { private Entry[] buckets = ...; private static class Entry { final Object key; Object value; Entry next; Entry(Object key, Object value, Entry next) { this.key = key; this.value = value; this.next = next; } } ... // Remainder omitted }   假设你只是递归地克隆哈希桶数组,就像我们为 Stack 所做的那样: // Broken clone method - results in shared mutable state! @Override public HashTable clone() { try { HashTable result = (HashTable) super.clone(); result.buckets = buckets.clone(); return result; } catch (CloneNotSupportedException e) { throw new AssertionError(); } }   虽然被克隆的对象有自己的哈希桶数组,但是这个数组引用与原始数组相同的链表,这很容易导致克隆对象和原始对象中的不确定性行为。 要解决这个问题,你必须复制包含每个桶的链表。 下面是一种常见的方法: // Recursive clone method for class with complex mutable state public class HashTable implements Cloneable { private Entry[] buckets = ...; private static class Entry { final Object key; Object value; Entry next; Entry(Object key, Object value, Entry next) { this.key = key; this.value = value; this.next = next; } // Recursively copy the linked list headed by this Entry Entry deep() { return new Entry(key, value, next == null ? null : next.deep()); } } @Override public HashTable clone() { try { HashTable result = (HashTable) super.clone(); result.buckets = new Entry[buckets.length]; for (int i = 0; i < buckets.length; i++) if (buckets[i] != null) result.buckets[i] = buckets[i].deep(); return result; } catch (CloneNotSupportedException e) { throw new AssertionError(); } } ... 
// Remainder omitted }   私有类 HashTable.Entry 已被扩充以支持「深度复制」方法。 HashTable 上的 clone 方法分配一个合适大小的新哈希桶数组,迭代原来哈希桶数组,深度复制每个非空的哈希桶。 Entry 上的 deep 方法递归地调用它自己以复制由头节点开始的整个链表。 如果哈希桶不是太长,这种技术很聪明并且工作正常。但是,克隆链表不是一个好方法,因为它为列表中的每个元素消耗一个栈帧(stack frame)。 如果列表很长,这很容易导致堆栈溢出。 为了防止这种情况发生,可以用迭代来替换 deep 中的递归: // Iteratively copy the linked list headed by this Entry Entry deep() { Entry result = new Entry(key, value, next); for (Entry p = result; p.next != null; p = p.next) p.next = new Entry(p.next.key, p.next.value, p.next.next); return result; }   克隆复杂可变对象的最后一种方法是调用 super.clone,将结果对象中的所有属性设置为其初始状态,然后调用更高级别的方法来重新生成原始对象的状态。 以 HashTable 为例,bucket 属性将被初始化为一个新的 bucket 数组,并且 put(key, value) 方法(未示出)被调用用于被克隆的哈希表中的键值映射。 这种方法通常产生一个简单,合理的优雅 clone 方法,其运行速度不如直接操纵克隆内部的方法快。 虽然这种方法是干净的,但它与整个 Cloneable 体系结构是对立的,因为它会盲目地重写构成体系结构基础的逐个属性对象复制。   与构造方法一样,clone 方法绝对不可以在构建过程中,调用一个可以重写的方法(详见第 19 条)。如果 clone 方法调用一个在子类中重写的方法,则在子类有机会在克隆中修复它的状态之前执行该方法,很可能导致克隆和原始对象的损坏。因此,我们在前面讨论的 put(key, value) 方法应该时 final 或 private 修饰的。(如果时 private 修饰,那么大概是一个非 final 公共方法的辅助方法)。   Object 类的 clone 方法被声明为抛出 CloneNotSupportedException 异常,但重写方法时不需要。 公共 clone 方法应该省略 throws 子句,因为不抛出检查时异常的方法更容易使用(详见第 71 条)。   在为继承设计一个类时(详见第 19 条),通常有两种选择,但无论选择哪一种,都不应该实现 Clonable 接口。你可以选择通过实现正确运行的受保护的 clone 方法来模仿 Object 的行为,该方法声明为抛出 CloneNotSupportedException 异常。 这给了子类实现 Cloneable 接口的自由,就像直接继承 Object 一样。 或者,可以选择不实现工作的 clone 方法,并通过提供以下简并 clone 实现来阻止子类实现它: // clone method for extendable class not supporting Cloneable @Override protected final Object clone() throws CloneNotSupportedException { throw new CloneNotSupportedException(); }   还有一个值得注意的细节。 如果你编写一个实现了 Cloneable 的线程安全的类,记得它的 clone 方法必须和其他方法一样(详见第 78 条)需要正确的同步。 Object 类的 clone 方法是不同步的,所以即使它的实现是令人满意的,也可能需要编写一个返回 super.clone() 的同步 clone 方法。   回顾一下,实现 Cloneable 的所有类应该重写公共 clone 方法,而这个方法的返回类型是类本身。 这个方法应该首先调用 super.clone,然后修复任何需要修复的属性。 通常,这意味着复制任何包含内部「深层结构」的可变对象,并用指向新对象的引用来代替原来指向这些对象的引用。虽然这些内部拷贝通常可以通过递归调用 clone 来实现,但这并不总是最好的方法。 如果类只包含基本类型或对不可变对象的引用,那么很可能是没有属性需要修复的情况。 这个规则也有例外。 例如,表示序列号或其他唯一 ID 的属性即使是基本类型的或不可变的,也需要被修正。   这么复杂是否真的有必要?很少。 如果你继承一个已经实现了 Cloneable 接口的类,你别无选择,只能实现一个行为良好的 clone 方法。 否则,通常你最好提供另一种对象复制方法。 对象复制更好的方法是提供一个复制构造方法或复制工厂。 复制构造方法接受参数,其类型为包含此构造方法的类,例如: // constructor public Yum(Yum yum) ;   复制工厂类似于复制构造方法的静态工厂: // factory public static Yum newInstance(Yum yum) ;   复制构造方法及其静态工厂变体与 Cloneable/clone 相比有许多优点:它们不依赖风险很大的语言外的对象创建机制;不要求遵守那些不太明确的惯例;不会与 final 属性的正确使用相冲突; 不会抛出不必要的检查异常; 而且不需要类型转换。   此外,复制构造方法或复制工厂可以接受类型为该类实现的接口的参数。 例如,按照惯例,所有通用集合实现都提供了一个构造方法,其参数的类型为 Collection 或 Map。 基于接口的复制构造方法和复制工厂(更适当地称为转换构造方法和转换工厂)允许客户端选择复制的实现类型,而不是强制客户端接受原始实现类型。 例如,假设你有一个 HashSet,并且你想把它复制为一个 TreeSet。 clone 方法不能提供这种功能,但使用转换构造方法很容易:new TreeSet<>(s)   考虑到与 Cloneable 接口相关的所有问题,新的接口不应该继承它,新的可扩展类不应该实现它。 虽然实现 Cloneable 接口对于 final 类没有什么危害,但应该将其视为性能优化的角度,仅在极少数情况下才是合理的(详见第 67 条)。 通常,复制功能最好由构造方法或工厂提供。 这个规则的一个明显的例外是数组,它最好用 clone 方法复制。 文章列表 更多推荐 更多 • Spark编程-结构化流式编程指南 概述,简单例子,编程模型,使用 Dataset 和 DataFrame 的API,连续处理,额外信息,基本概念,处理 Eventtime 和 Late Data,faulttolerance 语义,创建流式 DataFrame 和流式 • Spark编程-20 Spark 配置Spark 属性,Environment Variables环境变量,Configuring Logging配置 Logging,Overriding configuration directory覆盖配置目录,Inhe • Spark编程-在Mesos上运行Spark 运行原理,安装 Mesos,连接 Spark 到 Mesos,Mesos 运行模式,Mesos Docker 支持,集成 Hadoop 运行,使用 Mesos 动态分配资源,配置,故障排查和调试,从源码安装,第三方软件包,验证,上传 S • Spark编程-Running Spark on YARN 启动 Spark on YARN,准备,配置,调试应用,在安全集群中运行,添加其他的 JARs,配置外部的 Shuffle Service,用 Apache Oozie 来运行应用程序,Kerberos 故障排查,使用 Spark Hi • Spark编程-Spark 调优 数据序列化,内存调优,其它考虑,,内存管理概论,确定内存消耗,优化数据结构,序列化 RDD 存储,GC优化,并行级别,Reduce任务内存使用,广播大变量,数据局部性, 由于大多数Spark计算都在内存中,所以集群中的任何资源(C • Spark编程-Spark 
__label__pos
0.876214
Add a local user account By adding a local user account, you can provide users with direct access to your ExtraHop appliances and restrict their access as needed by their role in your organization. To learn about default system user accounts, see Local users. 1. Log into the Admin UI on the Discover or Command appliance. 2. In the Access Settings section, click Users. 3. Click Add User. 4. In the Personal Information section, type the following information: Login ID: The username that users will log into their ExtraHop appliances with, which cannot contain any spaces. For example, adalovelace. Full Name: A display name for the user, which can contain spaces. For example, Ada Lovelace. Password: The password for this account, which must be a minimum of 5 characters. Confirm Password: Re-type the password from the Password field. 5. In the User Privileges section, select the desired privileges for the user. Note:For more information, see the User privileges section. 6. Click Save. Tip: • To modify settings for a user, click the username from the list to bring up the Edit user page. • To delete a user account, click the red X icon. If you delete a user from a remote authentication server, such as LDAP, you must also delete the entry for that user on the ExtraHop appliance. Published 2021-04-13 10:05
__label__pos
0.808159
build_json.py

#!/usr/bin/env python
import sys
import os.path as op
import json
import argparse

def show_help():
    """Show help of the build_json script for theia N2A products"""
    print "This script is used to build json configuration file use then to compute snow mask using OTB applications on Spot/LandSat/Sentinel-2 products from theia platform"
    print "Usage: python build_theia_json -s [landsat|s2|take5] -d image_directory -e srtm_tile -o file.json"
    print "python run_snow_detector.py help to show help"

#----------------- MAIN ---------------------------------------------------
def main():
    """ Script to build json from theia N2A product"""
    parser = argparse.ArgumentParser(description='Build json from THEIA product')
    parser.add_argument("-s", help="select input sensors")
    parser.add_argument("-d", help="input dir")
    parser.add_argument("-o", help="input dir")
    parser.add_argument("-do", help="input dir")
    args = parser.parse_args()
    #print(args.accumulate(args.integers))

    #Parse sensor
    if (args.s == 's2'):
        multi = 10

    #Build json file
    data = {}
    data["general"] = {
        "pout": args.do,
        "nodata": -10000,
        "ram": 1024,
        "nb_threads": 1,
        "generate_vector": "false",
        "preprocessing": "false",
        "log": "true",
        "multi": 10
    }
    data["cloud"] = {
        "shadow_mask": 32,
        "all_cloud_mask": 1,
        "high_cloud_mask": 128,
        "rf": 12,
        "red_darkcloud": 500,
        "red_backtocloud": 100
    }
    data["snow"] = {
        "dz": 100,
        "ndsi_pass1": 0.4,
        "red_pass1": 200,
        "ndsi_pass2": 0.15,
        "red_pass2": 120,
        "fsnow_lim": 0.1,
        "fsnow_total_lim": 0.001
    }

    fp = open(args.o, 'w')
    fp.write(json.dumps(data, indent=4, sort_keys=True))
    fp.close()

if __name__ == "__main__":
    main()
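A quick usage sketch (the paths are placeholders, and note that the -e srtm_tile option mentioned in the help text is never registered with argparse, so only -s, -d, -o and -do are actually parsed):

python build_json.py -s s2 -d /path/to/SENTINEL2_product -do /path/to/output_dir -o param_test.json

The resulting file contains the three hard-coded blocks above with keys sorted alphabetically; the "general" section, for example, comes out roughly as:

{
    "general": {
        "generate_vector": "false",
        "log": "true",
        "multi": 10,
        "nb_threads": 1,
        "nodata": -10000,
        "pout": "/path/to/output_dir",
        "preprocessing": "false",
        "ram": 1024
    }
}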
__label__pos
0.997277
Run Java Tests With Maven Silently (Only Log on Failure) Last Updated:  February 4, 2022 | Published: January 7, 2022 When running our Java tests with Maven they usually produce a lot of noise in the console. While this log output can help understand test failures, it's typically superfluous when our test suite is passing. Nobody will take a look at the entire output if the tests are green. It's only making the build logs more bloated. A better solution would be to run our tests with Maven silent with no log output and only dump the log output once the tests fail. This blog post demonstrates how to achieve this simple yet convenient technique to run Java tests with Maven silently for a compact and followable build log. The Status Quo: Noisy Java Tests Based on our logging configuration, our tests produce quite some log output. When running our entire test suite locally or on a CI server (e.g., GitHub Actions or Jenkins), analyzing a test failure is tedious when there's a lot of noise in the logs. We first have to find our way to the correct position by scrolling or using the search functionality of, e.g., our browser or the integrated terminal of our IDE. A demo output for a test that verifies email functionality using GreenMail looks like the following: There's usually a lot of default noise of frameworks and test libraries that add up to quite some log output when running an entire test suite: While we could tweak our logger configuration and set the log level to ERROR for the framework and libraries logs, their INFO can still be quite relevant when analyzing a test failure. When scrolling through the log output of passing tests, we might also see stack traces and exceptions that are intended but might confuse newcomers as they wonder if something went wrong there. Having a clean build log without much noise would better help us follow the current build. The bigger our test suite, the more we have to scroll. If all tests pass, why pollute the console with log output from the tests? Our Maven build might also fail for different reasons than test failures, e.g., a failing OWASP dependency check or a dependency convergence issue. Getting fast to the root cause of the build failure is much simpler with a compact build log. The Goal: Run Tests with Maven Silently Our goal for this optimization is to have a compact Maven build log and only log the test output if it's really necessary (aka. tests are failing). Gradle is doing this already by default. When running tests with Gradle, we'll only see a test summary after running our tests. There's no intermediate noise inside our console. The goal is to achieve a somehow similar behavior as Gradle and run our tests silently. If they're passing, we're fine, and there's (usually) no need to investigate the log outcome of our tests. If one of our tests fails, report the build log to the console to analyze the test failure. In short, with our target solution, we have two scenarios: • No log output for tests in the console when all tests pass • Print the log output of our tests when a test fails The second scenario should be (hopefully) less likely. Hence most of our Maven builds should result in a compact and clean build log. We're fine with the log noise if there's a failure, as it helps us understand what went wrong. Let's see how we can achieve this with the least amount of configuration. 
The Solution: Customized Maven Setup

As a first step, we configure the desired log level for testing. We're using Logback (any logger works) and log any INFO (and above) statement to the console for the example above. We don't differentiate between our application's log and framework or test libraries.

Next comes the important configuration that'll make our tests silent. The Maven Surefire (unit tests) and the Failsafe (integration tests) plugins allow redirecting the console output to a file. We won't see any test log output in the console with this configuration, as it's stored within a file.

When activating this functionality (redirectTestOutputToFile), both plugins create an output file inside the target folder for each test class with the naming scheme TestClassName-output.txt. We can override the location of the output files using the reportsDirectory configuration option. Overriding this location helps us store the output of the Surefire and Failsafe plugins at the same place.

This configuration for both the Surefire and Failsafe plugins will mute our test runs, and Maven will only display a test execution summary for each test class. This compact build log makes it even fun to watch the test execution (assuming there are no flaky tests).

After running our tests, we can take a look at the content of the test-reports folder. For each test class, we'll find (at least) one text file that contains the test summary as we saw it in the build log. If the test prints output to the console, there'll be a -output.txt file with the content:

• de.rieckpil.blog.greenmail.MailServiceTest-output.txt: All console output of the test
• de.rieckpil.blog.greenmail.MailServiceTest.txt: The test summary, as seen in the build log

What's left is to extract the content of all our *-output.txt files if our build is failing. As long as our tests are all green, we can ignore the content of the output files. In case of a test failure, we must become active and dump the file contents to the console. For this purpose, we're using a combination of find and tail. For demonstration purposes, we'll use GitHub Actions. However, the present solution is portable to any other build server that provides functionality to detect a build failure and execute shell commands.

As part of the last step of our build workflow, we find all *-output.txt files and print their content. We only print the content of the test output files in case of a failure. With GitHub Actions, we can conditionally execute a step using a boolean expression: if: failure() || cancelled(). Both failure() and cancelled() are built-in functions of GitHub Actions. Every other CI server provides some similar functionality. We include cancelled() in the expression to cover the scenario when our test suite is stuck and we manually stop (aka. cancel) the build. If the build is passing, this last logging step is skipped, and no test log output is logged. By using tail -n +1 {} we print the file name before dumping its content to the console. This helps search for the failed test class to start the investigation. A sketch of both the plugin configuration and the CI step follows below.
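The post's embedded snippets did not survive here, so the following is a minimal sketch of the configuration just described. The plugin coordinates are the standard Surefire ones (repeat the same block for maven-failsafe-plugin); the test-reports path mirrors the folder name mentioned above, and the find pattern is an assumption you may need to adapt to your module layout.

<!-- pom.xml sketch: redirect test console output to files -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <redirectTestOutputToFile>true</redirectTestOutputToFile>
    <reportsDirectory>${project.build.directory}/test-reports</reportsDirectory>
  </configuration>
</plugin>

# GitHub Actions sketch: dump the captured test output only when the build fails or is cancelled
- name: Print test logs on failure
  if: failure() || cancelled()
  run: find . -path "*/test-reports/*-output.txt" -exec tail -n +1 {} +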
We can still see the console output when executing tests within our IDE. We'll capture any test console output with this mechanism, both from logging libraries and plain System.out.println calls. This technique also works when running our tests in parallel. However, if we parallelize the test methods, the console statements may be out of order inside the test output file. If you want to see this technique in action for a public repository, take a look at the Java Testing Toolbox repository on GitHub. As part of the main GitHub Actions workflow that builds the project(s) with Maven, you'll see the Java tests being run silently. If there's a build failure, you'll see the content of the test output files as one of the last jobs. Joyful testing, Philip
__label__pos
0.662132
Mayukh's World: Operator Overloading With C++
Friday, July 01, 2016

Index: Introduction • What Operators? • Rules • Assignment Operator • More on Assignment • Arithmetic Operators • Arithmetic with Globals • Increment/Decrement • Operator-Assignment • Unary Operators • Relational Operators • Bitshift/Extraction • Subscript Operator • Function Call Operator • Bit and Logical Ops • Comma Operator • Pointer to Member • new and delete Ops • Credits and Thanks

What is Operator Overloading?

Operator overloading is the ability for a language to redefine the way its operators behave for certain objects. It allows the programmer to extend the language and give it new abilities. Some languages such as C++, Algol, Python and Ruby allow operator overloading, and others such as Java deliberately leave it out. Operator overloading is a controversial subject for some -- anything that you can do with operator overloading can also be accomplished by using appropriate functions and method calls. On the other hand, it may make your code easier to read and comprehend. It also enables the STL library to work elegantly.

As it happens, C++ is a language that has a lot of operators. In the following pages, we will examine how to overload different operator types. As we will see later on, it is not necessary to overload all operators for a class, just the ones that we think should be overloaded. Also, C++ has some code in the standard library that reduces the amount of code that we need to write.

For the purposes of this discussion, we will implement operator overloading on a complex number class that we will create. Yes, I know that standard C++ already defines a complex class and overloads the operators, but we will reinvent the wheel here and learn how operator overloading works at the same time. If you don't know what a complex number is, you can find the concepts in any basic algebra book. For now, suffice it to say that a complex number has a real and an imaginary part. You may also perform arithmetic and relational operations between complex numbers or between a complex and a real number. For the purpose of this discussion, we will start with a complex class that is declared like this:

class Complex {
private:
    double real, imag;

public:
    Complex() { real = imag = 0; }
    Complex(double r, double i) { real = r; imag = i; }
    double GetReal(void) const { return real; }
    double GetImag(void) const { return imag; }
};

Next: Overloading Rules >>

Copyright © 2004 Mayukh Bose. All rights reserved. This work may be freely reproduced provided this copyright notice is preserved. OPTIONAL: Please consider linking to this website (http://www.mayukhbose.com/) as well.
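Since only this introductory page is included here, a brief preview of where the tutorial is heading: using nothing but the public interface of the Complex class above, an arithmetic and a relational operator might be overloaded as sketched below. This is a generic illustration, not the code from the later pages.

// Sketch: typical non-member overloads for the Complex class shown above.
Complex operator+(const Complex& a, const Complex& b)
{
    return Complex(a.GetReal() + b.GetReal(), a.GetImag() + b.GetImag());
}

bool operator==(const Complex& a, const Complex& b)
{
    return a.GetReal() == b.GetReal() && a.GetImag() == b.GetImag();
}

// Usage: Complex c = Complex(1, 2) + Complex(3, 4);   // c holds 4 + 6i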
__label__pos
0.830083
package HTML::FormHandler::Widget::Wrapper::Base; # ABSTRACT: common methods for widget wrappers use Moose::Role; use HTML::FormHandler::Render::Util ('process_attrs'); sub do_render_label { my ( $self, $result, $label_tag, $class ) = @_; $label_tag ||= $self->get_tag('label_tag') || 'label'; my $attr = $self->label_attributes( $result ); push @{ $attr->{class} }, @$class if $class; my $attrs = process_attrs($attr); my $label; if( $self->does_wrap_label ) { $label = $self->wrap_label( $self->label ); } else { $label = $self->get_tag('label_no_filter') ? $self->loc_label : $self->html_filter($self->loc_label); } $label .= $self->get_tag('label_after') if $label_tag ne 'legend'; my $id = $self->id; my $for = $label_tag eq 'label' ? qq{ for="$id"} : ''; return qq{<$label_tag$attrs$for>$label}; } sub wrap_checkbox { my ( $self, $result, $rendered_widget, $default_wrapper ) = @_; my $option_wrapper = $self->option_wrapper || $default_wrapper; if ( $option_wrapper && $option_wrapper ne 'standard' && $option_wrapper ne 'label' ) { unless ( $self->can($option_wrapper) ) { die "HFH: no option_wrapper method '$option_wrapper'"; } return $self->$option_wrapper($result, $rendered_widget); } else { return $self->standard_wrap_checkbox($result, $rendered_widget); } } sub standard_wrap_checkbox { my ( $self, $result, $rendered_widget ) = @_; return $rendered_widget if( $self->get_tag('no_wrapped_label' ) ); my $label = $self->get_checkbox_label; my $id = $self->id; my $for = qq{ for="$id"}; # use "simple" label attributes for inner label my @label_class = ('checkbox'); push @label_class, 'inline' if $self->get_tag('inline'); my $lattrs = process_attrs( { class => \@label_class } ); # return wrapped checkbox, either on left or right my $output = ''; if ( $self->get_tag('label_left') ) { $output = qq{\n$label\n$rendered_widget}; } else { $output = qq{$rendered_widget\n$label\n}; } if ( $self->get_tag('checkbox_element_wrapper') ) { $output = qq{ $output }; } return $output; } sub get_checkbox_label { my $self = shift; my $label = $self->option_label || ''; if( $label eq '' && ! $self->do_label ) { $label = $self->get_tag('label_no_filter') ? $self->loc_label : $self->html_filter($self->loc_label); } elsif( $label ne '' ) { $label = $self->get_tag('label_no_filter') ? 
$self->_localize($label) : $self->html_filter($self->_localize($label)); } return $label; } sub b3_label_left { my ( $self, $result, $rendered_widget ) = @_; my $label = $self->get_checkbox_label; my $id = $self->id; my $output = qq{ }; $output .= qq{}; $output .= qq{ }; return $output; } sub b3_label_left_inline { my ( $self, $result, $rendered_widget ) = @_; my $label = $self->get_checkbox_label; my $id = $self->id; my $output .= qq{}; return $output; } sub b3_label_right { my ( $self, $result, $rendered_widget ) = @_; my $label = $self->get_checkbox_label; my $id = $self->id; my $output = qq{ }; $output .= qq{}; $output .= qq{ }; return $output; } sub label_left { my ( $self, $result, $rendered_widget ) = @_; my $label = $self->get_checkbox_label; my $id = $self->id; my $output .= qq{}; return $output; } sub label_right { my ( $self, $result, $rendered_widget ) = @_; my $label = $self->get_checkbox_label; my $id = $self->id; my $output .= qq{}; return $output; } sub no_wrapped_label { my ( $self, $result, $rendered_widget ) = @_; return $rendered_widget; } # for compatibility with older code sub render_label { my $self = shift; my $attrs = process_attrs($self->label_attributes); my $label = $self->html_filter($self->loc_label); $label .= ": " unless $self->get_tag('label_no_colon'); return qq{$label}; } # this is not actually used any more, but is left here for compatibility # with user created widgets sub render_class { my ( $self, $result ) = @_; $result ||= $self->result; return process_attrs($self->wrapper_attributes($result)); } use namespace::autoclean; 1; __END__ =pod =encoding UTF-8 =head1 NAME HTML::FormHandler::Widget::Wrapper::Base - common methods for widget wrappers =head1 VERSION version 0.40057 =head1 DESCRIPTION Provides several common methods for wrapper widgets, including 'do_render_label' and 'wrap_checkbox'. Implements the checkbox 'option_wrapper' rendering: b3_label_left b3_label_right b3_label_left_inline label_left label_right no_wrapped_label =head1 NAME HTML::FormHandler::Widget::Wrapper::Base =head1 AUTHOR FormHandler Contributors - see HTML::FormHandler =head1 COPYRIGHT AND LICENSE This software is copyright (c) 2014 by Gerda Shank. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. =cut
__label__pos
0.999785
Launch Jitsi Meet App on Mobile from QR code?

android, ios, jitsi, jitsi-meet, qr-code

I'm doing an event where some people will be walking around with mobile phones and at several locations there will be QR codes. The mobile phone users need to be able to scan the QR codes to get into a face-to-face chat with distant participants on laptops. I'm planning on using Jitsi Meet as the video chat application.

As the people walking around will have registered in advance, I can ask them to install the Jitsi Meet app, which seems more reliable than whatever random web browsers they might have. They will likely be a mixture of iPhone and Android users.

What I need to know is how to get from the QR code to the app. Do QR codes support a custom uri? What is the uri for jitsi meet? Is it the same across android and ios?

Source: Android Questions
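In case it helps others planning the same setup: a QR code just encodes text, so it can carry an ordinary meeting URL. The Jitsi Meet mobile apps are set up to handle meeting links on meet.jit.si (and they also register an org.jitsi.meet:// scheme), so encoding something like the line below is usually enough to open the app on both Android and iOS. The room name is only a placeholder, and for a self-hosted server the deep-link behaviour should be tested on both platforms before printing the codes:

https://meet.jit.si/YourEventRoomName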
__label__pos
0.673975
Sedona Framework in the Field Developed by Tridium, Inc, Sedona Framework™ is an open-source software environment designed to make it easy to build smart, networked, embedded devices which are well suited for implementing control applications. The Sedona language facilitates component-oriented programming where components are assembled onto a wire sheet, configured and interconnected, to create applications. The Sedona Framework trademark is owned by Tridium, Inc. but can be used by acknowledging the owner. More importantly, the Sedona Framework technology is available to the public under an Academic Free License (AFL 3.0) granted by the licensor—Tridium, Inc. A licensee is allowed worldwide, royalty-free and non-exclusive use of the technology. How are Sedona applications produced? Using a Sedona tool, components deployed in kits are assembled onto wire sheets creating applications that are executed by a Sedona device. The Sedona language is ideally suited for graphical representation of control strategies. It has a similar look-and-feel to the popular Niagara Framework™ and it is IP-based. Those with experience with Niagara Framework will have no problem understanding Sedona Framework. For those without Niagara experience, the graphical representation of components linked on a wire sheet to create applications is intuitive and can be easily learned with a minimum of training. What comprises a Sedona device? A Sedona device or Sedona controller consists of a Sedona Virtual Machine (SVM), a collection of kits that include components, and hardware in the form of a processor, memory and input/output circuitry that interfaces to real-world devices. On this platform the Sedona application resides. Sedona developers create the intricacies of the Sedona device while system integrators create applications that run on Sedona devices. About Sedona Framework What is a Sedona Virtual Machine? A Sedona Virtual Machine (SVM) is a small portable fast interpreter that can reside on most any hardware platform or operating system while executing a Sedona application. Depending upon the kits used by the Sedona application, it is possible to run the identical Sedona application on another SVM with a completely different hardware platform and operating system without modification. The original Tridium SVM has been modified by different developers in the Sedona community to run on different platforms such as limited-resource microcontrollers, Linux platforms, and powerful Windows workstations. In fact, SVMs have been developed for Raspberry Pi derivatives Grove Pi and Pi Face. SVMs are intended to operate over IP networks making Sedona attractive for Internet of Things (IoT). What is the role of the developer? A Sedona developer is either a hardware manufacturer or a software developer skilled in the use of Sedona Framework. Physical hardware such as CPU, memory and I/O need to be designed to become a Sedona device. The Sedona Virtual Machine must be modified to accommodate the hardware platform. A developer would want to visit the SedonaDev.org site to learn more about intricacies of Sedona Framework. Custom kits called hardware-dependent kits need to be developed that support the native functions of the platform. On this platform hardware-independent kits can be installed to provide more functionality. Once all elements are put together you will have a Sedona device awaiting an application. What is the role of the system integrator? 
The system integrator translates the required sequence of operation (SOO) into a Sedona application that executes the sequence. The integrator is skilled in creating applications which are created by extracting components from kits, placing them onto a wire sheet, configuring the components if necessary, and interconnecting the components with links. Because of the system integrators’ specialized application knowledge, the SI recommends to the developer any custom components that need to be developed that would improve effectiveness of applications. It is in the spirit of the community to share any custom hardware-independent kits. What is the difference between components and kits? Components are the fundamental building blocks for creating applications. However, components are deployed into a Sedona device in a container called a kit. Similar types of components are assigned to kits with relevant names such as Math, Logic, HVAC and so on. There are three types of kits: • Original Sedona 1.2 kits provided by Tridium available to all. • Custom hardware-independent kits by Sedona developers that can be shared. • Custom hardware-dependent kits by Sedona developers that cannot be shared. The spirit of the Sedona Community is to share kits if possible. Where do I find a Sedona tool? The original Sedona tool is Tridium’s Niagara Workbench 3.37 or 3.38 but with Sedona installed. Other Sedona tools are available from Sedona developers. The Sedona Alliance recommends the Sedona Application Editor (SAE) by community member Contemporary Controls for free by download. Included with the SAE download is a SVM that runs on a PC that can be used to program when evaluating Sedona. A link to SAE can be found under the Resources tab. What is there to like about Sedona? • The graphical experience of selecting components, configuring parameters, and linking components to create applications is easy to do and to explain to others • The technology is open source, royalty-free, and supported by several companies so the opportunity exists to share experiences • A community exists of users who create applications, and developers who make components and virtual machines • The technology is portable to other platforms and will run on a small micro-controller or a powerful computer • The opportunity exists to share in the exchange of custom components and kits within the community • Program debugging is fast because the effect of any change is seen instantly To learn more and join Sedona Alliance, please contact [email protected]    
__label__pos
0.881576
Regular Expression Simplificator

Tool to simplify a regex. The regexp simplificator (or regular expression simplifier) shortens the string of characters used to search for patterns in a text.

Regular Expression Simplificator - Tag(s): Data processing

Answers to Questions

How does the regexp minification work?

The regular expressions simplifier replaces useless elements in a regular expression in order to minimize it or make it more readable, by analyzing the patterns that make up the regex string.

Example: x{0,} is equivalent to x*
Example: [aaabbb] is equivalent to [ab]
Example: (ab|ac) can also be written a[bc]

Some regular expressions can not be simplified. In this case, the program will return the same string. The program is in beta test, and does not work all the time! Moreover, some parentheses that are potentially useful for capturing can be deleted, and escape characters can be ignored.

How to reduce the size of a regexp?

There are shorthand character classes and metacharacters:

Abbreviation   Equivalent
\d             [0-9]
\w             [A-Za-z0-9_]
\s             [ \t\r\n\f]
\D             [^\d]
\W             [^\w]
\S             [^\s]

The letter d stands for digit, w for word (letter / alphanumeric character) and s for space (spacing); uppercase letters represent the negation of the set.

Example: D for a character that is not a digit, etc.

Source code

dCode retains ownership of the source code of the script Regular Expression Simplificator online. Except explicit open source licence (indicated Creative Commons / free), any algorithm, applet, snippet, software (converter, solver, encryption / decryption, encoding / decoding, ciphering / deciphering, translator), or any function (convert, solve, decrypt, encrypt, decipher, cipher, decode, code, translate) written in any informatic language (PHP, Java, C#, Python, Javascript, Matlab, etc.) which dCode owns rights will not be given for free. To download the online Regular Expression Simplificator script for offline use on PC, iPhone or Android, ask for price quote on contact page!

Source: https://www.dcode.fr/regular-expression-simplificator
© 2018 dCode — The ultimate 'toolkit' to solve every games / riddles / geocaches.
__label__pos
0.726234
Quick Answer: What Is Android Statusbar? How do I find hidden apps on Android? Android 7.1From any Home screen, tap the Apps icon.Tap Settings.Tap Apps.Scroll through the list of apps that display or tap MORE and select Show system apps.If the app is hidden, ‘Disabled’ will be listed in the field with the app name.Tap the desired application.Tap ENABLE to show the app.. How do I turn on notification bar on Android? Open your phone’s Settings app. Notifications. Under “Lock screen,” tap Notifications on lock screen or On lock screen. Choose Show alerting and silent notifications. Where is the notification bar on my Android phone? The Notification Panel is a place to quickly access alerts, notifications and shortcuts. The Notification Panel is at the top of your mobile device’s screen. It is hidden in the screen but can be accessed by swiping your finger from the top of the screen to the bottom. It is accessible from any menu or application. What apps do cheaters use? The Five Apps That Cheaters UseInstagram. Suspicious your significant other is cheating on you? … Uber and Ridesharing Apps. Move over Uber Eats, welcome to Uber Cheats. … Snapchat. Snapchat is the pioneer of apps that allow for messages to disappear within seconds of sending them. … Vaulty Stocks. … Black SMS – Protected Texts. How do I change my status bar? Change Status Bar Colour for Individual Apps on Android PhoneOpen Material Status Bar app and tap on the Home icon located in the bottom menu. … On the next screen, tap on the app for which you want to change the status bar colour and from the drop-down, tap on Color. (More items… Is Systemui a virus? First, this file isn’t a virus. It is a system file used by android UI manager. So, if there is a small problem with this file, don’t consider it as a virus. Second, if you still think it’s a virus, install any antivirus app from playstore and try scanning your device. How do I get rid of Android notification bar? Navigate to Device Restrictions to disable the status bar on devices. Restrict the Status Bar option to disable the status bar on the device. By default the Status Bar expansion option is restricted, which disables the notification bar. What is Android SystemUI used for? SystemUI is a persistent process that provides UI for the system but outside of the system_server process. The starting point for most of sysui code is a list of services that extend SystemUI that are started up by SystemUIApplication. How do I customize my status bar? How to Customize the Status Bar on Android (Without Rooting)Step One: Install Material Status Bar and Grant It Permissions. Download and install the app from the Play Store, find it in your app drawer and open it. … Step Two: Customize the Status Bar. The main menu of the app has a few options, so let’s run through them. … Step Three: Get Rid of Ads with the Paid Version (Optional) What is Android hidden menu? It’s called the System UI Tuner and it can be used for customizing an Android gadget’s status bar, clock and app notification settings. Introduced in Android Marshmallow, this experimental menu is hidden but it’s not difficult to find. Once you get to it, you’ll wish you knew about it sooner. Where is my status bar? The Status bar, located at the bottom of the screen, allows you to enable data tracking, navigate the report, modify screen magnification, and refresh report data. What is the pull down menu on Android called? To find the Android Quick Settings menu, just drag your finger from the top of your screen downward. 
If your phone is unlocked, you’ll see an abbreviated menu (the screen to the left) that you can either use as-is or drag down to see an expanded quick settings tray (the screen to the right) for more options. What do hidden apps look like on Android? From the app drawer, tap the three dots in the upper-right corner of the screen. Tap Hide apps. The list of apps that are hidden from the app list displays. If this screen is blank or the Hide apps option is missing, no apps are hidden. How can I make my status bar transparent in Android? Use the following tag in your app theme to make the status bar transparent: @android:color/transparentAnd then use this code in your activity’s onCreate method. View decorView = getWindow(). getDecorView(); decorView. setSystemUiVisibility(View. Why has my status bar disappeared? The status bar being hidden may be in Settings>Display, or in the launcher settings. Settings>Launcher. You can try downloading a launcher, like Nova. That may force the status bar back. What is a status bar in Android? The status bar is at the top of the display, on the right. The time, the battery status and current connections like Bluetooth and Wi-Fi are displayed here. On the left side of this strip, you’ll find app icons to alert you to new messages, updates to the Play Store, and other notifications. How do I get rid of status bar? Hide the Status Bar on Android 4.1 and Higher // status bar is hidden, so hide that too if necessary. View decorView = getWindow(). getDecorView(); // Hide the status bar. What is the best secret texting app? So, if confidentiality is critical for your communication, then check out this list of some best encrypted messaging apps for Android and iOS platforms….Signal Private Messenger. … Telegram. … 3 iMessage. … Threema. … Wickr Me – Private Messenger. … Silence. … Viber Messenger. … WhatsApp.More items…• What does a status bar look like? A status bar is a graphical control element used to display certain status information depending upon the application or device. It is usually displayed as a horizontal bar at the bottom of the application window on computers, or along the top of the screen for tablets and smartphones. What is the UI system on a Samsung phone? We received a few complaints from Samsung Galaxy S7 owners regarding the error message “Unfortunately, System UI has stopped.” The System UI is actually an Android service that handles the front-end of the system including launchers, home screens, wallpapers, themes and skins.
__label__pos
0.967571
SVG to PNG – Convert SVG to PNG in C#

Programmatic file conversion

SVG stands for Scalable Vector Graphics, which defines vector-based graphics for the Web. SVG is an XML-based vector image format for two-dimensional graphics with support for interactivity and animation. It is popular for rendering two-dimensional images on the internet, where images can scale to any size. But in case we have a requirement to convert SVG to PNG for lossless compression, where the image doesn't lose detail and quality after compression, then Aspose.Imaging Cloud is a programmatic solution.

Image Processing API

Aspose.Imaging Cloud is our programming solution to image processing requirements. Perform all operations including resizing, cropping, rotating, scaling, flipping, searching, and exporting images to other supported file formats. As the API is built as per REST architecture, it can be accessed on any platform. The API empowers you to incorporate image processing capabilities within Desktop, Web, Mobile, and Cloud-based applications. Now in order to further facilitate our customers, we have created programming language-specific SDKs as a wrapper around the REST APIs, so you can get all the benefits/features of the Cloud API within the programming language of your choice. But before proceeding further, the first step is the installation of the SDK on the local system. Please visit the following link to learn more about How to install Aspose.Cloud SDKs.

Convert SVG to PNG in C#

Please follow the instructions below to convert an SVG image already available in Cloud storage to PNG format.

• The first step is to create an instance of ImagingApi while passing ClientID and ClientSecret details as arguments
• Secondly, upload the SVG image to Cloud storage using the UploadFile(..) method of ImagingApi
• Thirdly, create an instance of the ConvertImageRequest class while passing the name of the input SVG and the resultant format as arguments
• Now call the ConvertImage(..) method to perform the conversion operation. The resultant PNG is returned as a Stream instance
• Finally, call the custom method using File.Create to save the Stream instance as a file on the local drive

For your reference, the sample images used in the above example can be downloaded from trashloader2.svg and Converted.png.

Image 1: SVG to PNG conversion preview.

SVG to PNG using cURL

Like all REST APIs, Aspose.Imaging Cloud can also be accessed via cURL commands. However, in order to ensure data integrity and privacy, you need to generate a JWT access token based on client credentials. Please execute the following command to generate one:

curl -v "https://api.aspose.cloud/connect/token" \
-X POST \
-d "grant_type=client_credentials&client_id=4ccf1790-accc-41e9-8d18-a78dbb2ed1aa&client_secret=caac6e3d4a4724b2feb53f4e460eade3" \
-H "Content-Type: application/x-www-form-urlencoded" \
-H "Accept: application/json"

Now execute the following cURL command to convert SVG files already available in Cloud storage to PNG format. The result is returned as a response stream and can be saved to a local drive.

curl -X GET "https://api.aspose.cloud/v3.0/imaging/trashloader2.svg/convert?format=png" \
-H "accept: application/json" \
-H "authorization: Bearer <JWT Token>" \
-o Converted.png

In case you have a requirement to convert an SVG image passed as zero-indexed multipart/form-data content or as a raw body stream:
curl -X POST "https://api.aspose.cloud/v3.0/imaging/convert?format=png" \ -H "accept: application/json" \ -H "authorization: Bearer <JWT Token>" \ -H "Content-Type: multipart/form-data" \ -d {"imageData":{}} \ -o Converted.png Conclusion We have discussed the image conversion capabilities being offered by Aspose.Imaging Cloud. Apart from accessing API through either of the approaches mentioned above, it can also be accessed via Swagger interface where you can test the API within the web browser. Also please note that the Cloud SDKs are developed under MIT license, so a complete source code can be downloaded from GitHub. In case you encounter any issue while using the API or you have any related query, please feel free to contact us via free product support forum. Related Links We recommend visiting the following links to learn more about
__label__pos
0.724512
SCons / FltkFluidBuilder

The FltkFluidBuilder creates C++ source and header files from Fluid's .fl files. Fluid is the graphical GUI editor of the GUI toolkit FLTK. Simply add the .fl files to the list of sources to be compiled.

#!python
sources = Split("""main.cpp UserInterface.fl""")

The builder will create two files: fluid_UserInterface.h and fluid_UserInterface.cxx and add them to the list of sources to be built. Here's the builder and emitter + registration:

#!python
import os
import SCons.Util
import SCons.Tool   # needed below for createObjBuilders

# emitter to add the generated .h file to the dependencies
def fluidEmitter(target, source, env):
    adjustixes = SCons.Util.adjustixes
    file = SCons.Util.splitext(str(source[0].name))[0]
    file = os.path.join(str(target[0].get_dir()), file)
    target.append(adjustixes(file, "fluid_", ".h"))
    return target, source

fluidBuilder = Builder(action = "cd ${SOURCE.dir} && " +
                                "fluid -o fluid_${SOURCE.filebase}.cxx " +
                                "-h fluid_${SOURCE.filebase}.h -c ${SOURCE.name} ",
                       emitter = fluidEmitter,
                       src_suffix = '.fl',
                       suffix = '.cxx',
                       prefix = 'fluid_')

# register builder
env.Append( BUILDERS = { 'Fluid': fluidBuilder } )

# add builder to the builders for shared and static objects,
# so we can use all sources in one list
shared, static = SCons.Tool.createObjBuilders(env)
shared.src_builder.append('Fluid')
static.src_builder.append('Fluid')

-- hirsch 2006-08-11 10:54:26
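A short usage sketch (not from the original wiki page): once the builder is registered as above, the mixed source list can be handed straight to env.Program. The generated fluid_UserInterface.cxx is then compiled like any other source; FLTK include and link settings still have to be configured separately.

#!python
# Hypothetical SConstruct fragment using the Fluid builder registered above.
env = Environment()
# ... builder/emitter registration from this page goes here ...
sources = Split("""main.cpp
                   UserInterface.fl""")
env.Program(target='myapp', source=sources)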
__label__pos
0.986866
DATA Step, Macro, Functions and more

Does anyone know how to eliminate scientific notation from proc compare list file.

Contributor:

First of all thanks for reading my question.

Here is what I want to do: I want to know the number of observations changed between two identical datasets for each variable. I want to create an excel report showing the variable name and the number of differences.

Here is what I'm doing: I used proc compare to get the number of differences (Ndif). I created a sas dataset using ODS and extract the numbers from the dataset using a data step.

Here is the problem: Proc compare is generating scientific notation for Number of differences (Ndif). How can I have proc compare display the full number?

NOTE: Both datasets have 8 million observations and 500 variables. I tried to use a merge statement but it wasn't efficient.

Here is a sample code:

ods listing close ;
ods output compsum=outputsum;
proc compare base=work.dsn1 compare=work.dsn2
             maxprint=(100,500) novalues listequalvar;
   id loan_key;
run;
ods output close ;
ods listing ;

data outdir.summary;
   set indir.outputsum;
   length Field_Name $40;
   batch=left(compbl(batch));
   if (index(batch, 'NUM') gt 0 or index(batch, 'CHAR') gt 0) and type ne 'h' then
   do;
      if countw(batch, ' ') = 3 or countw(batch, ' ') = 4 then &curr_count. = 0;
      if scan(batch, 2) eq 'NUM'  and countw(batch, ' ') = 7 then &curr_count. = scanq(batch, 5) ;
      if scan(batch, 2) eq 'NUM'  and countw(batch, ' ') = 6 then &curr_count. = scanq(batch, 4);
      if scan(batch, 2) eq 'CHAR' and countw(batch, ' ') = 6 then &curr_count. = scanq(batch, 5);
      if scan(batch, 2) eq 'CHAR' and countw(batch, ' ') = 5 then &curr_count. = scanq(batch, 4);
   output;
   end;
keep Field_Name &curr_count.;
run;

Thanks in advance

Super User:

Re: Does anyone know how to eliminate scientific notation from proc compare list file.

In the data step you can change the format for the variables. Currently it is likely that NDIF has a format like best8. Try

format ndif f16.0;

However you may have enough differences that NDIF is actually trying to exceed SAS precision.
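A small sketch of the suggestion above (dataset and variable names are taken from the thread, not verified against the actual ODS output; adjust the width if your counts can be larger):

/* give NDIF an explicit wide numeric format so it is not shown in scientific notation */
data work.outputsum_fmt;
   set work.outputsum;
   format ndif f16.0;   /* or best32. for very large counts */
run;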
__label__pos
0.50553
There is an urgent need for industries to adapt themselves to blockchain technology. The distributed ledger will make centralized applications obsolete. The Gartner Hype Cycle of 2017 predicts that blockchain will be part of our lives in the next 2–3 years, as we move closer to the day when blockchain is no longer the catchline but a cliché. However, the new buzzword in town is DApps. Naturally, the first question is: what are DApps?

Though the concept has only emerged in the past year, we are yet to come up with an exact definition of the term Decentralized Applications. However, DApps have some specific, unique features:

1. Open Source: Governed autonomously with consensus-based decision making, the code is available for scrutiny as well as improvement.
2. Decentralized: All the records are stored in the decentralized blockchain that is public, avoiding the demerits of centralization.
3. Incentivized: Those who validate the blockchain must be incentivized through cryptographic tokens.
4. Protocol: The application community must agree on a consensus protocol such as Proof of Work or Proof of Stake.
5. Profitable: The decentralized applications are beneficial for both creators and users.

Maybe you think DApp is a fancy word for building an application on top of the blockchain, and in specific ways a DApp is just a startup with a blockchain. However, a DApp is an architecture and an infrastructure of decentralised servers on top of a blockchain, mainly the Ethereum blockchain. A DApp might have several smart contracts handling different functions.

Why Ethereum's Blockchain? The answer is simple. The Ethereum blockchain acts as a decentralized computing platform with an interplanetary file system (a decentralized storage system) and a layer of intelligence with deep learning. So what does this do? DApps can achieve any task if the data set is correct and can be scaled according to the requirements.

How can we create a DApp? The following are the tools required to make your own Decentralized App with JavaScript (a short connection sketch follows at the end of this article):

1. TestRPC (https://github.com/trufflesuite/ganache-cli/wiki/Installing-TestRPC-on-Windows)
2. Truffle Framework (http://truffleframework.com/)
3. Web3js API (https://web3js.readthedocs.io/en/1.0/getting-started.html)

The DApps can be tested in an isolated environment called the Ethereum Virtual Machine (EVM). Any company that wants to create a smart contract can use the EVM without any changes in the main blockchain operations. The Ethereum nodes in the network are powered by the EVM implementation and have the capacity to execute instructions in the smart contract. DApps on the EVM will be the gateway to building a reliable smart contract for newbies as well as adept coders and developers who want to learn the Solidity language.

There is no denying the potential of the technology; what is to be understood is that DApps are still an up-and-coming technology. We cannot argue against DApps and blockchain, and the biggest asset of Ethereum and the EVM is that it is free for all coding fanatics.
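To make the JavaScript tooling above concrete, here is a minimal sketch of connecting Web3.js 1.x to a local development chain. The RPC port is the usual TestRPC/Ganache default, and the account layout assumes the unlocked developer accounts those tools provide.

// Sketch: connect Web3.js to a local TestRPC/Ganache node and read an account balance.
const Web3 = require('web3');
const web3 = new Web3('http://127.0.0.1:8545');   // default TestRPC/Ganache RPC endpoint

async function main() {
  const accounts = await web3.eth.getAccounts();  // unlocked development accounts
  const balance = await web3.eth.getBalance(accounts[0]);
  console.log(accounts[0], web3.utils.fromWei(balance, 'ether'), 'ETH');
}

main();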
__label__pos
0.865706
Make your dreams come true love what you do. Forums z-index issue? • # November 13, 2008 at 12:17 pm here is the url – magnets-on-sale.com need assistance in figuring out how to get that drop down menu to sit on top of that flash container right below it. right now it shows underneath it when mouseover on main menu below are the links to the two stylesheets working on this page main.css – styles elements minus the nav Code: @charset “utf-8″; /* CSS Document */ html, body, div, span, applet, object, iframe, h1, h2, h3, h4, h5, h6, p, blockquote, pre, a, abbr, acronym, address, big, cite, code, del, dfn, em, font, img, ins, kbd, q, s, samp, small, strike, strong, sub, sup, tt, var, dl, dt, dd, ol, ul, li, fieldset, form, label, legend, table, caption, tbody, tfoot, thead, tr, th, td { margin: 0; padding: 0; border: 0; outline: 0; font-weight: inherit; font-style: inherit; font-size: 100%; font-family: inherit; vertical-align: baseline; position:relative; } /* remember to define focus styles! */ :focus { outline: 0; } body { line-height: 1; color: black; background: #fff; } /* tables still need ‘cellspacing=”0″‘ in the markup */ table { border-collapse: separate; border-spacing: 0; } caption, th, td { text-align: left; font-weight: normal; } blockquote:before, blockquote:after, q:before, q:after { content: “”; } blockquote, q { quotes: “” “”; } #wrapper { width: 960px; position:relative; margin: 0 auto; } #contact { height: 41px; margin-bottom:10px; margin-top: 5px; } img.contact { float: right; margin-top: 5px; } #scroller { padding-top: 10px; } #header { background: url(images/header-background.gif) repeat-x; background-position:top left; height: 288px; border-top: 1px solid #ccc; border-right: 1px solid #ccc; border-left: 1px solid #ccc; } img.logo { position: absolute; top: 50px; } img.display { position: absolute; top: 42px; right: 0px; } #tag-line { background:url(images/bot-header.gif) no-repeat; height: 45px; width: 958px; border-right: 1px solid #ccc; border-left: 1px solid #ccc; padding-bottom: 5px; } #maincontent { background: #FFFFFF; color: #000000; padding: 15px; border-right: 1px solid #ccc; border-left: 1px solid #ccc; } #maincontent p { font-family:Arial, Helvetica, sans-serif; font-size:14px; line-height: 1.5em; text-align: justify; } #maincontent a{ color: #000; text-decoration:none; border-bottom: 1px dotted #000; } #maincontent a:hover{ color: #000; background: #B7B7B7; text-decoration: none; } #maincontent h1{ font-family:Arial, Helvetica, sans-serif; font-size: 20px; font-weight:bold; margin-bottom: 25px; color: #083bb0; } #maincontent h2{ font-family:Arial, Helvetica, sans-serif; font-size: 20px; font-weight:bold; margin: 25px 0; color: #083bb0; } #maincontent h3{ font-family:Arial, Helvetica, sans-serif; font-size: 20px; font-weight:bold; margin: 60px 0 15px 0; color: #083bb0; } #gallery { background: #FFFFFF; color: #000000; padding: 15px 0px 10px 15px; border-right: 1px solid #ccc; border-left: 1px solid #ccc; } #gallery p{ font-family:Arial, Helvetica, sans-serif; font-size:14px; line-height: 1.5em; text-align: justify; } #gallery h2{ font-family:Arial, Helvetica, sans-serif; font-size: 20px; font-weight:bold; margin-bottom: 25px; color: #083bb0; } #gallery h1{ font-family:Arial, Helvetica, sans-serif; font-size: 20px; font-weight:bold; margin: 25px 0 10px 0; color: #083bb0; } img.percentages { float: right; margin: 60px 0 0 10px; } #footer { background:#ba1c1d; height: 325px; width: 960px; border-top: 6px solid #961818; margin-right: auto; margin-left: auto; } /* 
– - – ADxMenu: BASIC styles [ MANDATORY ] – - – */ /* remove all list stylings */ .menu, .menu ul { margin: 0; padding: 0; list-style-type: none; display: block; } .menu li { margin: 0; padding: 0; border: 0; display: block; float: left; /* move all main list items into one row, by floating them */ position: relative; /* position each LI, thus creating potential IE.win overlap problem */ z-index: 5; /* thus we need to apply explicit z-index here… */ } .menu li:hover { z-index: 2; /* …and here. this makes sure active item is always above anything else in the menu */ white-space: normal;/* required to resolve IE7 :hover bug (z-index above is ignored if this is not present) see http://www.tanfa.co.uk/css/articles/pure-css-popups-bug.asp for other stuff that work */ } .menu li li { float: none;/* items of the nested menus are kept on separate lines */ } .menu ul { visibility: hidden; /* initially hide all submenus. */ position: absolute; z-index: 10; left: 0; /* while hidden, always keep them at the top left corner, */ top: 0; /* to avoid scrollbars as much as possible */ } .menu li:hover>ul { visibility: visible; /* display submenu them on hover */ top: 100%; /* 1st level go below their parent item */ } .menu li li:hover>ul { /* 2nd+ levels go on the right side of the parent item */ top: 0; left: 100%; } /* — float.clear – force containment of floated LIs inside of UL */ .menu:after, .menu ul:after { content: “.”; height: 0; display: block; visibility: hidden; overflow: hidden; clear: both; } .menu, .menu ul { /* IE7 float clear: */ min-height: 0; } /* — float.clear.END — */ /* — sticky.submenu – it should not disappear when your mouse moves a bit outside the submenu YOU SHOULD NOT STYLE the background of the “.menu UL” or this feature may not work properly! if you do it, make sure you 110% know what you do */ .menu ul { background-image: url(empty.gif); /* required for sticky to work in IE6 and IE7 – due to their (different) hover bugs */ padding: 10px 30px 30px 30px; margin: -10px 0 0 -30px; /*background: #f00;*/ /* uncomment this if you want to see the “safe” area. 
you can also use to adjust the safe area to your requirement */ } .menu ul ul { padding: 30px 30px 30px 10px; margin: -30px 0 0 -10px; } /* — sticky.submenu.END — */ /* – - – ADxMenu: DESIGN styles [ OPTIONAL, design your heart out :) ] – - – */ .menu, .menu ul li { color: #fff; background: url(images/nav-background.gif) repeat-x; font-family: arial,trebuchet ms; font-size: 90%; font-weight: bold; padding-left: 15px; } .menu ul { width: 14.5em; } .menu a { text-decoration: none; color: #fff; padding: 1.2em 1.2em; display: block; position: relative; } .menu a:hover, .menu li:hover>a { color: #fff; background:url(images/nav-hover.gif) repeat-x; } .menu li li { /* create borders around each item */ background: #1b297e; font-family: arial,trebuchet ms; font-size: 100%; padding-left: 0px; } .menu ul>li + li { /* and remove the top border on all but first item in the list */ border-top: 0; } .menu li li a:hover{ /* create borders around each item */ background: #ba1c1d; } .menu li li:hover>ul { /* inset 2nd+ submenus, to show off overlapping */ top: 5px; left: 90%; } /* special colouring for “Main menu:”, and for “xx submenu” items in ADxMenu placed here to clarify the terminology I use when referencing submenus in posts */ .menu>li:first-child>a { color: #fff; } /* – - – ADxMenu: IE6 BASIC styles [MANDATORY] – - – */ /* this rules improves accessibility – if Javascript is disabled, the entire menu will be visible of course, that means that it might require different styling then. in which case you can use adxie class – see: aplus.co.yu/adxmenu/examples/ie6-double-style/ */ .menu ul { visibility: visible; position: static; } .menu, .menu ul { /* float.clear */ zoom: 1; } .menu li.adxmhover { z-index: 10000; } .menu .adxmhoverUL { /* li:hover>ul selector */ visibility: visible; } .menu .adxmhoverUL { /* 1st-level submenu go below their parent item */ top: 100%; left: 0; } .menu .adxmhoverUL .adxmhoverUL { /* 2nd+ levels go on the right side of the parent item */ top: 0; left: 100%; } /* – - – ADxMenu: DESIGN styles – - – */ .menu ul a { /* fix clickability-area problem */ zoom: 1; } .menu li li { /* fix white gap problem */ float: left; width: 100%; } .menu li li { /* prevent double-line between items */ margin-top: -1px; } .menu a:hover, .menu .adxmhoverA { /* li:hover>a selector */ background: #ba1c1d; } .menu .adxmhoverUL .adxmhoverUL { /* inset 2nd+ submenus, to show off overlapping */ top: 5px; left: 90%; } /* Fix for IE5/Mac *//*/ .menu a { float: left; } /* End Fix */ /*]]>*/ ul.items{ font-family: arial, trebuchet ms,sans-serif; font-size: 14px; list-style-type: disc; list-style-position:inside; line-height: 1.5em; margin: 15px 30px; } span.highlight{ color: #FF0000; } img.options{ float: left; margin: 0 10px 10px 0; } #options { height: 160px; } #size-desc { height: 40px; } #sizes { height: 280px; } #measurements { width:930px; } img.inline { float: right; margin: 5px 0 5px 10px; } img.quote { float: right; margin: 5px 20px 5px 10px; } img.inline-bot { float: right; margin: 20px 0 10px 5px; } #domticker{ width: 940px; height: 25px; background-color: #ba1c1d; font-family:Arial, Helvetica, sans-serif; font-size:20px; font-weight:bold; font-style:italic; color: #ffffff; text-align: center; text-transform:uppercase; } em{ color: #000000; font-weight: bold; } #domticker div{ /*IE6 bug fix when text is bold and fade effect (alpha filter) is enabled. 
Style inner DIV with same color as outer DIV*/ background-color: #ba1c1d; } #domticker a{ font-weight: bold; color: #ffffff; text-decoration: none; } #domticker a:hover{ color: #000000; font-weight: bold; } #domticker2{ width: 940px; height: 37px; padding: 3px; } #domticker2 a{ text-decoration: none; } .someclass{ //class to apply to your scroller(s) if desired } #footer-logo { position: absolute; bottom: 10px; left: 370px; } #footer-content{ text-align:center; width: 930px; padding: 20px 20px 10px 10px; } #footer-content p{ font-family:Arial, Helvetica, sans-serif; font-size: 12px; font-weight: bold; line-height: 1.5em; } #footer-content a{ text-decoration: underline; color: #000000; } #footer-content a:hover{ text-decoration: none; color: #000000; } #pricing-info { clear:both; } #containing-box{ margin-top: 30px; text-align: center; } #content-box { width:930px; } img.front { margin:10px 0 5px 10px; float: right; } #flash-container { width: 935px; height: 335px; margin-bottom:15px; z-index: 9998; } #mssHolder { z-index: 9998; } chrometheme/chromestyle.css – controls/styles the nav Code: .chromestyle{ width: 960px; height: 60px; background: url(../images/chromebg.gif) repeat-x; } .chromestyle:after{ /*Add margin between menu and rest of content in Firefox*/ content: “.”; display: block; height: 0; clear: both; visibility: hidden; } .chromestyle ul{ width: 960px; height: 60px; padding: 21px 4px; margin: 0; text-align: center; /*set value to “left”, “center”, or “right”*/ } .chromestyle ul li{ display: inline; } .chromestyle ul li a{ color: #ffffff; padding: 21px 13px; margin: 0; text-decoration: none; font-weight: bold; font-family: arial,verdana, sans-serif; font-size: 14px; } .chromestyle ul li a:hover, .chromestyle ul li a.selected{ /*script dynamically adds a class of “selected” to the current active menu item*/ text-decoration: underline; color: #fff;/*THEME CHANGE HERE*/ } #chromemenu ul .contact a { background-image: none; } /* ######### Style for Drop Down Menu ######### */ .dropmenudiv{ position:absolute; top: 0; border-bottom-width: 0; line-height:18px; z-index:9999; background-color: #1c2a7f; width: 230px; visibility: hidden; filter: progid:DXImageTransform.Microsoft.Shadow(color=#CACACA,direction=135,strength=4); /*Add Shadow in IE. Remove if desired*/ } .dropmenudiv a{ width: auto; display: block; padding: 10px 15px; margin: 0; text-decoration: none; font-weight: bold; font-family: arial,verdana, sans-serif; font-size: 14px; color: #ffffff; z-index:9999; } * html .dropmenudiv a{ /*IE only hack*/ width: 100%; } .dropmenudiv a:hover{ /*THEME CHANGE HERE*/ text-decoration: underline; background: #132072; z-index:9999; } Thanks # November 13, 2008 at 12:21 pm just realized w/o doing much browser checking that the issue seems to be in FF 2 & 3 as of right now. IE 6 & 7 show it working properly. so i guess the issue is narrowed down somewhat # November 13, 2008 at 2:09 pm Ok I can’t really look at the code now and I did this at work a while ago but I can’t find the test files when I was experimenting with it. What you gotta do though is make the flash windowmode transparent, easiest way is in flash when you publish it. You then gotta put it in a div and make it.. I think -1 z-index. then in the css for the drop downs set it as 2 for zindex. Again I will look for the file and have a look at the code and get you something when I get home. But there’s a start… Catch you soon # November 13, 2008 at 2:12 pm Yeah I remember doing something like this before. 
I think all you have to do is add wmode="transparent" to the tag where the Flash file is embedded. # November 19, 2008 at 4:15 pm add a parameter when you load the flash file on the page setting wmode to either opaque or transparent.. opaque is easier on the processor, so I favor that personally. Other than that, just remember that z-index only works on absolutely & relatively positioned elements. :) I'm working on a site currently that uses a transparent png as a frame that sits over the top of a flash slideshow, and a dropdown menu with transparent pngs that drop down over the top of both the flash & png frame. # November 20, 2008 at 12:01 pm "tcindie" wrote: I'm working on a site currently that uses a transparent png as a frame that sits over the top of a flash slideshow, and a dropdown menu with transparent pngs that drop down over the top of both the flash & png frame. Sounds like an absolute nightmare to get that to work in IE6 :lol: # November 20, 2008 at 4:26 pm "daGUY" wrote: Sounds like an absolute nightmare to get that to work in IE6 :lol: Actually, the only thing I'm struggling with at the moment is that the dropdowns aren't working at all in IE, but I haven't put much work into it yet either.. the overlays and such are working just fine.. Most likely I'll end up with some custom code that only runs for IE browsers. *shrug* no big whoop. ;)
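For reference, the fix described in the last few replies usually comes down to two pieces: a wmode parameter on the Flash embed, and a higher z-index on a positioned dropdown. A minimal sketch follows (the selector names match the stylesheets above; the file name, dimensions and exact values are illustrative, not taken from the thread):

Code:
<object type="application/x-shockwave-flash" data="slideshow.swf" width="935" height="335">
<param name="movie" value="slideshow.swf" />
<param name="wmode" value="opaque" /> <!-- or "transparent"; without this the plugin paints over all HTML -->
</object>

Code:
/* z-index only applies to positioned elements */
.dropmenudiv {
position: absolute;
z-index: 9999; /* above #flash-container, which sits at 9998 */
}

If the movie is embedded with SWFObject or a similar script, wmode is passed in its params object instead of directly in the markup.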
Commit 39f77bef authored by Guillaume Melquiond Remove some obsolete conversions between machine integers and unbounded integers. parent e9366834 ......@@ -90,25 +90,25 @@ module BinarySearchInt32 exception Not_found (* raised to signal a search failure *) let binary_search (a : array int32) (v : int32) : int32 requires { forall i1 i2 : int. 0 <= i1 <= i2 < to_int a.length -> to_int a[i1] <= to_int a[i2] } ensures { 0 <= to_int result < to_int a.length /\ a[to_int result] = v } requires { forall i1 i2 : int. 0 <= i1 <= i2 < a.length -> a[i1] <= a[i2] } ensures { 0 <= result < a.length /\ a[result] = v } raises { Not_found -> forall i:int. 0 <= i < to_int a.length -> a[i] <> v } forall i:int. 0 <= i < a.length -> a[i] <> v } = "vc:sp" let l = ref (of_int 0) in let u = ref (length a - of_int 1) in let l = ref 0 in let u = ref (length a - 1) in while !l <= !u do invariant { 0 <= to_int !l /\ to_int !u < to_int a.length } invariant { forall i : int. 0 <= i < to_int a.length -> a[i] = v -> to_int !l <= i <= to_int !u } variant { to_int !u - to_int !l } let m = !l + (!u - !l) / of_int 2 in assert { to_int !l <= to_int m <= to_int !u }; invariant { 0 <= !l /\ !u < a.length } invariant { forall i : int. 0 <= i < a.length -> a[i] = v -> !l <= i <= !u } variant { !u - !l } let m = !l + (!u - !l) / 2 in assert { !l <= m <= !u }; if a[m] < v then l := m + of_int 1 l := m + 1 else if a[m] > v then u := m - of_int 1 u := m - 1 else return m done; ...... ......@@ -14,7 +14,7 @@ <proof prover="0"><result status="valid" time="0.03"/></proof> </goal> </theory> <theory name="BinarySearchInt32" proved="true" sum="c9f0cf881df8be787d964ba3de51306f"> <theory name="BinarySearchInt32" proved="true" sum="0aeaeda1d3225a20418f2d07e713896a"> <goal name="VC binary_search" expl="VC for binary_search" proved="true"> <proof prover="0"><result status="valid" time="0.10"/></proof> </goal> ......
WordPress reference for developers and theme authors validate_file_to_edit › Since: 1.5.0 Deprecated: n/a validate_file_to_edit ( $file, $allowed_files = array() ) Parameters: (2) • (string) $file File the user is attempting to edit. Required: Yes • (string[]) $allowed_files Optional. Array of allowed files to edit. `$file` must match an entry exactly. Required: No Default: array() Returns: • (string|void) Returns the file name on success, dies on failure. Defined in: Codex: Makes sure that the file that was requested to be edited is allowed to be edited. Function will die if you are not allowed to edit the file. Source function validate_file_to_edit( $file, $allowed_files = array() ) { $code = validate_file( $file, $allowed_files ); if ( ! $code ) { return $file; } switch ( $code ) { case 1: wp_die( __( 'Sorry, that file cannot be edited.' ) ); // case 2 : // wp_die( __('Sorry, can&#8217;t call files with their real path.' )); case 3: wp_die( __( 'Sorry, that file cannot be edited.' ) ); } }
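For illustration, a hypothetical caller might use the function like this (the request parameter and the whitelist below are an editor's example, not part of the documentation above):

<?php
// Only allow editing of a small whitelist of theme files.
$allowed = array( 'style.css', 'functions.php' );

// Dies with "Sorry, that file cannot be edited." if validate_file() reports a
// problem; otherwise returns the validated file name unchanged.
$file = validate_file_to_edit( $_REQUEST['file'], $allowed );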
Jpa query sub filter with hibernate search #1 I need to combine a JPA query with hibernate search, for example: JPA Query: " SELECT p FROM Product p WHERE p.cat = 'cars' " Then over the results of the JPA Query I need to sub-filter with the hibernate search query: queryBuilder .keyword() .onFields("productName", "description") .matching(text) .createQuery(); fullTextEntityManager.getResultList(); (list of results must be only on the result of the JPA query) #2 I'd strongly recommend rather indexing "cat" and doing the whole query in Hibernate Search. Any other solution is bound to bring significant problems. That being said, if you really need to execute one query on the JPA side and one on the Hibernate Search side, you can always do that manually and combine the results. For example you could select the entity IDs in the JPA query, then add a predicate to your HSearch query: List<Long> idsFromJpaQuery = entityManager.createQuery( "SELECT p.id FROM Product p WHERE p.cat = 'cars' ", Long.class ).getResultList(); Query textQuery = queryBuilder.keyword() .onFields("productName", "description") .matching(text) .createQuery(); BooleanJunction idJunction = queryBuilder.bool(); for ( Long id : idsFromJpaQuery ) { idJunction.should( queryBuilder.keyword().onField( "id" ).matching( id ).createQuery() ); } Query idQuery = idJunction.createQuery(); Query combinedQuery = queryBuilder.bool() .must( textQuery ) .must( idQuery ) .createQuery(); FullTextQuery query = fullTextEntityManager.createFullTextQuery( combinedQuery, Product.class ); List<Product> results = query.getResultList(); As to why this feature is not available in Hibernate Search directly: The problem with combining queries from different data sources is that it performs very badly, in particular if you need to paginate the result. Basically, if you need to reach page 30, you will have to fetch the results of more than 30 pages from both queries, then compute the intersection of the result sets inside your application, possibly fetching some more pages from each query because the intersection didn't produce enough results to reach the 30th page. Really, really bad performance. More often than not, you're better off indexing some more fields. #3 Thanks a lot! I just wondered if it was possible, again thank you very much!
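Following the advice in the accepted answer, a sketch of the "index cat and do everything in Hibernate Search" approach could look like this (Hibernate Search 5 style annotations to match the thread; entity details, analyzers and imports are assumed, not taken from the thread):

@Indexed
@Entity
public class Product {
    @Id @GeneratedValue
    private Long id;

    @Field(analyze = Analyze.NO)   // keyword-style field so 'cars' matches exactly
    private String cat;

    @Field
    private String productName;

    @Field
    private String description;
}

Query combinedQuery = queryBuilder.bool()
    .must( queryBuilder.keyword().onField( "cat" ).matching( "cars" ).createQuery() )
    .must( queryBuilder.keyword().onFields( "productName", "description" )
            .matching( text ).createQuery() )
    .createQuery();

List<Product> results = fullTextEntityManager
    .createFullTextQuery( combinedQuery, Product.class )
    .getResultList();

This keeps pagination and scoring inside a single Lucene query instead of intersecting two result sets in the application.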
go-ethereum: github.com/ethereum/go-ethereum/internal/cmdtest package cmdtest import "github.com/ethereum/go-ethereum/internal/cmdtest" Package Files test_cmd.go type TestCmd type TestCmd struct { // For total convenience, all testing methods are available. *testing.T Func template.FuncMap Data interface{} Cleanup func() // Err will contain the process exit error or interrupt signal error Err error // contains filtered or unexported fields } func NewTestCmd func NewTestCmd(t *testing.T, data interface{}) *TestCmd func (*TestCmd) CloseStdin func (tt *TestCmd) CloseStdin() func (*TestCmd) ExitStatus func (tt *TestCmd) ExitStatus() int ExitStatus exposes the process' OS exit code It will only return a valid value after the process has finished. func (*TestCmd) Expect func (tt *TestCmd) Expect(tplsource string) Expect runs its argument as a template, then expects the child process to output the result of the template within 5s. If the template starts with a newline, the newline is removed before matching. func (*TestCmd) ExpectExit func (tt *TestCmd) ExpectExit() ExpectExit expects the child process to exit within 5s without printing any additional text on stdout. func (*TestCmd) ExpectRegexp func (tt *TestCmd) ExpectRegexp(regex string) (*regexp.Regexp, []string) ExpectRegexp expects the child process to output text matching the given regular expression within 5s. Note that an arbitrary amount of output may be consumed by the regular expression. This usually means that expect cannot be used after ExpectRegexp. func (*TestCmd) InputLine func (tt *TestCmd) InputLine(s string) string InputLine writes the given text to the child's stdin. This method can also be called from an expect template, e.g.: geth.expect(`Passphrase: {{.InputLine "password"}}`) func (*TestCmd) Interrupt func (tt *TestCmd) Interrupt() func (*TestCmd) Kill func (tt *TestCmd) Kill() func (*TestCmd) Run func (tt *TestCmd) Run(name string, args ...string) Run exec's the current binary using name as argv[0] which will trigger the reexec init function for that name (e.g. "geth-test" in cmd/geth/run_test.go) func (*TestCmd) SetTemplateFunc func (tt *TestCmd) SetTemplateFunc(name string, fn interface{}) func (*TestCmd) StderrText func (tt *TestCmd) StderrText() string StderrText returns any stderr output written so far. The returned text holds all log lines after ExpectExit has returned. func (*TestCmd) WaitExit func (tt *TestCmd) WaitExit() Package cmdtest imports 16 packages.
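A brief usage sketch (an editor's illustration of how this API is typically driven from a Go test; the command name, arguments and expected output are assumptions, not taken from the docs above, and the usual testing/cmdtest imports are implied):

func TestVersion(t *testing.T) {
	tt := cmdtest.NewTestCmd(t, nil)

	// Run re-execs the current test binary under the given argv[0], which the
	// real cmd tests register via the reexec package (e.g. "geth-test").
	tt.Run("geth-test", "version")

	tt.Expect("Geth\n") // template matched against the child's stdout
	tt.ExpectExit()     // the child must then exit within 5s with no extra output

	if status := tt.ExitStatus(); status != 0 {
		t.Fatalf("unexpected exit status %d", status)
	}
}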
How to Separate Numbers in Excel: A Step-by-Step Guide for Beginners how to separate numbers in excel Separating numbers in Excel can be a lifesaver when you need to organize data efficiently. Whether you’re dealing with phone numbers, product codes, or any other numerical data, Excel’s features can simplify the process. By using built-in functions and tools, you can easily split numbers into different columns or extract specific digits. Here’s a quick guide to help you do just that. Step-by-Step Tutorial on How to Separate Numbers in Excel This tutorial will walk you through the steps needed to separate numbers in Excel. By the end of this, you’ll know how to use Text to Columns and various functions to split and manipulate your numerical data. Step 1: Select the Cells with Numbers First, highlight the cells containing the numbers you want to separate. This is your starting point. Make sure you select all cells that need to be split to avoid repeating the process for each group of numbers. Step 2: Open the Text to Columns Wizard Go to the Data tab on the Ribbon and click on "Text to Columns." This wizard is a powerful tool that can split your data based on a delimiter or fixed width. It’s versatile and user-friendly. Step 3: Choose the Delimiter Option In the Text to Columns Wizard, select "Delimited" and click Next. This step allows you to specify what character will separate the numbers. It could be a comma, space, or any other symbol. Step 4: Select the Delimiter Choose the delimiter that fits your data, such as a comma or space, and click Next. This ensures Excel knows exactly where to split the numbers. Preview your data to make sure it’s being separated correctly. Step 5: Finish the Wizard Click Finish to complete the process. Your selected numbers will now be separated into different columns based on the delimiter you specified. Step 6: Use Functions for More Complex Splitting For more advanced splitting, use functions like LEFT, RIGHT, MID, and FIND. These functions allow you to extract specific parts of numbers. For example, use the LEFT function to get the first few digits, or the MID function for numbers in the middle. After completing these steps, your numbers will be neatly separated into different columns or extracted into new cells as needed. Tips for Separating Numbers in Excel • Double-Check Your Delimiters: Ensure that the delimiter you choose matches what’s in your data. • Backup Your Data: Always make a copy of your data before performing bulk operations. • Preview Data: Use the preview pane in the Text to Columns Wizard to see how your data will be split. • Learn Functions: Having a good grasp of functions like LEFT, RIGHT, MID, and FIND can be incredibly useful for more complex tasks. • Practice: The more you work with Excel’s features, the more comfortable you’ll become. Frequently Asked Questions What if my data doesn’t have a clear delimiter? You can use fixed widths instead of delimiters in the Text to Columns Wizard. This works well if the segments of your numbers are always the same length. Can I separate numbers into rows instead of columns? Yes, after separating numbers into columns, you can use the TRANSPOSE function to turn columns into rows. How do I undo a Text to Columns action? Simply press Ctrl + Z immediately after performing the action. What if my numbers are mixed with text? Use the FIND function to locate the position of numbers and then apply the LEFT, RIGHT, or MID functions accordingly. Can I automate this process? 
Yes, you can record a macro to automate the Text to Columns process. Summary 1. Select the cells with numbers. 2. Open the Text to Columns wizard. 3. Choose the delimiter option. 4. Select the delimiter. 5. Finish the wizard. 6. Use functions for more complex splitting. Conclusion Separating numbers in Excel might seem daunting at first, but it's a straightforward process once you get the hang of it. By following these steps, you can quickly organize and manipulate your numerical data, making your spreadsheets much more efficient. Remember, Excel is a powerful tool, and the more you explore its features, the more effective you'll become in managing your data. If you're keen to delve deeper, consider exploring additional Excel functions and formulas that can further streamline your tasks. So, what are you waiting for? Fire up Excel and start separating those numbers today!
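To make Step 6 concrete, suppose cell A1 holds the text value 555-0199 (an invented example; adjust the positions to your own data):

=LEFT(A1,3)    returns 555
=FIND("-",A1)  returns 4 (the position of the separator)
=MID(A1,5,4)   returns 0199
=RIGHT(A1,4)   also returns 0199

Combining FIND with MID lets the split adapt when the separator is not always in the same position, for example =MID(A1,FIND("-",A1)+1,LEN(A1)).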
How to echo newline? Asked by: Fabiola Flatley Score: 4.3/5 (73 votes) Using echo Note echo adds \n at the end of each sentence by default whether we use -e or not. The -e option may not work in all systems and versions. Some versions of echo may even print -e as part of their output. How do you echo to the next line? Uses of \n in Bash 1. String in double quote: echo -e "This is First Line \nThis is Second Line" 2. String in single quote: echo -e 'This is First Line \nThis is Second Line' 3. String with $ prefix: echo $'This is First Line \nThis is Second Line' 4. Using printf command: printf "This is First Line \nThis is Second Line" Does echo append a newline? echo adds a newline by default. ... Also, if you're running this on a linux system and opening the file on a windows or mac system, make sure your editor supports *nix newlines, or it'll appear all on one line even though it's on multiple lines. How do you echo without a new line? The best way to remove the new line is to add '-n'. This signals not to add a new line. When you want to write more complicated commands or sort everything in a single line, you should use the '-n' option. So, it won't print the numbers on the same line. How do I go to a new line in terminal? Alternatively, instead of typing Enter , you can type Ctrl-V Ctrl-J . That way, the newline character (aka ^J ) is entered without the current buffer being accepted, and you can then go back to editing the first line later on. ( \026 being the ^V character). Is there a way to echo a blackslash followed by newline in bash? (2 Solutions!!) 35 related questions found How do you go to the next line in terminal without executing? With bash , after you've written the command you don't want to execute, press Control-A to move the cursor to the beginning of the line, type # and then press Enter. How do you go to a new line in bash? On the command line, press Shift + Enter to do the line break inside the string. How would you output hello without a newline in Linux? I am able to do this in bash, using: echo -ne HELLO > file. txt and then, 'HELLO' is written into file. txt without the newline character to be added in the end of the file. How do I make my curls quieter? The -s or --silent option act as silent or quiet mode. Don't show progress meter or error messages. Makes Curl mute. It will still output the data you ask for, potentially even to the terminal/stdout unless you redirect it. What does a echo do? Amazon Echo is a smart speaker that responds to voice commands using Alexa, its artificially intelligent personal assistant. All Echo models can answer questions, research the internet, command smart home devices, and stream music. What is the difference between echo and printf in Unix? echo always exits with a 0 status, and simply prints arguments followed by an end of line character on the standard output, while printf allows for definition of a formatting string and gives a non-zero exit status code upon failure. printf has more control over the output format. What is curl -- silent? curl(1) --silent. transfer a URL. -s, --silent Silent or quiet mode. Don't show progress meter or error messages. Do not output the trailing newline means? n do not output the trailing newline. So it prints the string and does not go to the new line after that (which is the default behavior), so the output of the next command will be printed on the right side of the echoed string. What flag can be used with Echo to disable output of the trailing newline? 
When the -n option is used, the trailing newline is suppressed. If the -e option is given, the following backslash-escaped characters will be interpreted: \\ - Displays a backslash character. Why does echo add new line? 1. Overview. In many cases, we need to add a new line in a sentence to format our output. ... 2. Using echo. The echo command is one of the most commonly used Linux commands for printing to standard output: $ echo "test statement \n to separate sentences" test statement \n to separate sentences. ... 3. Using printf. ... 4. Using $ ... 5. Conclusion. How do you move to the next line in Linux? Press ctrl + o, then $ sign. Now press Esc + i and finally hit enter and this will lead you to the next line. How do I add a new line in Linux? For example, in Linux a new line is denoted by “\n”, also called a Line Feed. In Windows, a new line is denoted using “\r\n”, sometimes called a Carriage Return and Line Feed, or CRLF. Adding a new line in Java is as simple as including “\n” , “\r”, or “\r\n” at the end of our string. How do you go to the next line in Terminal Mac? To send an LF to the Terminal, press Return. You should get a new line, depending on the application you have running. To put the CRLF's on the end of each header, as required by the HTTP protocol, press Control-V, Return, Return. How do you start a new line in Terminal Mac? You can use ctrl-A to jump to the beginning of the line and ctrl-E to jump to the end. ctrl-XX (hold ctrl and press 'x' twice) will jump to the beginning of the line and then back to the current position the second time you use it. What is curl D option? -d, --data (HTTP) Sends the specified data in a POST request to the HTTP server, in the same way that a browser does when a user has filled in an HTML form and presses the submit button. This will cause curl to pass the data to the server using the content-type application/x-www-form-urlencoded. What is curl K? This option has existed in the curl tool since the early days, and has been frequently misused ever since. ... Or I should perhaps call it “overused”. What is curl M? M curl is a stronger curl than the L curl and it helps to make your client's eye more vivid and defined. In fact, the curl is as strong as D curl but is the perfect curl to cover the iris of the eye, whilst it has a shorter adherent surface to ensure the curl starts from the root of the lash. Is echo the same as print? echo and print are more or less the same. They are both used to output data to the screen. The differences are small: echo has no return value while print has a return value of 1 so it can be used in expressions. echo can take multiple parameters (although such usage is rare) while print can take one argument. Should I use echo or printf? While printf is better for many reasons, most people still use echo because the syntax is simpler. The main reasons why you should prefer printf are: echo is not standardized, it will behave differently on different systems.
__label__pos
0.999926
工学1号馆 home java反射实战 By Wu Yudong on June 14, 2015 原创文章,转载请注明: 转载自工学1号馆 本文将利用java反射的理论知识进行实战 在之前的文章《java反射的应用》和《java核心系列11-反射》中,进行了java反射的理论与相关实践,本文将就一些常用的属性进行实战 反射的基石Class Class类代表Java类,它的各个实例对象又分别对应什么呢? 对应各个类在内存中的字节码,例如,Person类的字节码,ArrayList类的字节码,等等。 一个类被类加载器加载到内存中,占用一片存储空间,这个空间里面的内容就是类的字节码,不同的类的字节码是不同的,所以它们在内存中的内容是不同的,这一个个的空间可分别用一个个的对象来表示,这些对象显然具有相同的类型--Class类 如何得到各个字节码对应的实例对象( Class类型) 类名.class,例如,System.class 对象.getClass(),例如,new Date().getClass() Class.forName("类名"),例如,Class.forName("java.util.Date"); 九个预定义Class实例对象: 参看Class.isPrimitive方法的帮助 int.class == Integer.TYPE 数组类型的Class实例对象 Class.isArray() 总之,只要是在源程序中出现的类型,都有各自的Class实例对象,例如,int[],void… 跑个程序吧: public static void main(String[] args) throws Exception{ // TODO Auto-generated method stub String str1 = "wuyudong"; Class cls1 = str1.getClass(); Class cls2 = String.class; Class cls3 = Class.forName("java.lang.String"); System.out.println(cls1 == cls2); //true System.out.println(cls1 == cls3); //true System.out.println(int.class.isPrimitive()); //true System.out.println(String.class.isPrimitive()); //false System.out.println(int.class == Integer.class); //false System.out.println(int.class == Integer.TYPE); //true System.out.println(int[].class.isPrimitive()); //false } 理解反射的概念 反射就是把Java类中的各种成分映射成相应的java类。例如,一个Java类中用一个Class类的对象来表示,一个类中的组成部分:成员变量,方法,构造方法,包等等信息也用一个个的Java类来表示,就像汽车是一个类,汽车中的发动机,变速箱等等也是一个个的类。表示java类的Class类显然要提供一系列的方法,来获得其中的变量、方法、构造方法、修饰符、包等信息,这些信息就是用相应类的实例对象来表示,它们是Field、Method、Contructor、Package等等。 一个类中的每个成员都可以用相应的反射API类的一个实例对象来表示,通过调用Class类的方法可以得到这些实例对象后,得到这些实例对象后有什么用呢?怎么用呢?这正是学习和应用反射的要点 Constructor类 Constructor代表某个类中的一个构造方法 得到某个类所有的构造方法: Constructor [] constructors= Class.forName("java.lang.String").getConstructors(); 得到某一个指定的构造方法: Constructor constructor = Class.forName(“java.lang.String”).getConstructor(StringBuffer.class); //获得方法时要用到类型 创建实例对象: String str = new String(new StringBuffer("abc"));  //通常方式 String str = (String)constructor.newInstance(new StringBuffer("abc")); //反射方式 //调用获得的方法时要用到上面相同类型的实例对象 Class.newInstance()方法: String obj = (String)Class.forName("java.lang.String").newInstance(); 该方法内部先得到默认的构造方法,然后用该构造方法创建实例对象。 该方法内部的具体代码是怎样写的呢?用到了缓存机制来保存默认构造方法的实例对象。 跑个程序: //使用反射实现:new String(new StringBuffer("wuyudong")); Constructor constructor1 = String.class.getConstructor(StringBuffer.class);//获取对应的Constructor String str = (String)constructor1.newInstance(new StringBuffer("wuyudong")); //实例化 Field类 Field代表某个类中的一个成员变量 问题:得到的Field对象是对应到类上面的成员变量,还是对应到对象上的成员变量?类只有一个,而该类的实例对象有多个,如果是与对象关联,哪关联的是哪个对象呢?所以字段fieldX 代表的是x的定义,而不是具体的x变量。 示例代码: 首先定义一个实验反射的类ReflectPoint: public class ReflectPoint { private int x; public int y; public String str1 = "bsdbngas"; public String str2 = "assadbfbb"; public ReflectPoint(int x, int y) { this.x = x; this.y = y; } public String toString() { return str1 + ":" + str2; } } 获取实例化对象中的变量的值 public static void main(String[] args) throws Exception{ // TODO Auto-generated method stub Constructor constructor1 = String.class.getConstructor(StringBuffer.class);//获取对应的Constructor String str = (String)constructor1.newInstance(new StringBuffer("wuyudong")); //实例化 System.out.println(str); ReflectPoint point = new ReflectPoint(1,7); Field fieldY = Class.forName("itcast.day1.ReflectPoint").getField("y"); System.out.println(fieldY.get(point)); //error! 
Field x = Class.forName("cn.itcast.corejava.ReflectPoint").getField("x"); Field fieldX = Class.forName("itcast.day1.ReflectPoint").getDeclaredField("x"); //由于x为私有变量 fieldX.setAccessible(true); //由于x为私有变量 System.out.println(fieldX.get(point)); } 实战:将任意一个对象中的所有String类型的成员变量所对应的字符串内容中的"b"改成"a"。 public static void printString(Object obj) throws Exception{ Class cls = obj.getClass(); //获取参数对应的类 Field[] fields = cls.getFields(); //获取该类所有的字段 for(Field field : fields) { //迭代遍历所有的字段 if(field.getType() == String.class) { //判断字段的类型是否为String String oldstr = (String)field.get(obj); //获取String域 String newstr = oldstr.replace('b', 'a'); //替换操作 field.set(obj, newstr); //设置域 } } } Method类 Method代表某个类中的一个成员方法,得到类中的某一个方法。例子: Method charAt = Class.forName("java.lang.String").getMethod("charAt", int.class); //调用方法: System.out.println(str.charAt(1));  //通常方式 System.out.println(charAt.invoke(str, 1)); //反射方式 如果传递给Method对象的invoke()方法的第一个参数为null,这说明该Method对象对应的是一个静态方法! 用反射方式执行某个类中的main方法 目标: 写一个程序,这个程序能够根据用户提供的类名,去执行该类中的main方法。用普通方式调完后,要明白为什么要用反射方式去调用 问题: 启动Java程序的main方法的参数是一个字符串数组,即public static void main(String[] args),通过反射方式来调用这个main方法时,如何为invoke方法传递参数呢?按jdk1.5的语法,整个数组是一个参数,而按jdk1.4的语法,数组中的每个元素对应一个参数,当把一个字符串数组作为参数传递给invoke方法时,javac会到底按照哪种语法进行处理呢?jdk1.5肯定要兼容jdk1.4的语法,会按jdk1.4的语法进行处理,即把数组打散成为若干个单独的参数。所以,在给main方法传递参数时,不能使用代码mainMethod.invoke(null,new String[]{“xxx”}),javac只把它当作jdk1.4的语法进行理解,而不把它当作jdk1.5的语法解释,因此会出现参数类型不对的问题。 解决办法: mainMethod.invoke(null,new Object[]{new String[]{"xxx"}}); mainMethod.invoke(null,(Object)new String[]{"xxx"}); 编译器会作特殊处理,编译时不把参数当作数组看待,也就不会数组打散成若干个参数了 具体代码如下: class TestArguments { //首先定义一个实验类 public static void main(String[] args) { for(String arg : args) { System.out.println(arg); } } } public class ReflectTest { public static void main(String[] args) throws Exception{ Method mMain = Class.forName(args[0]).getMethod("main", String[].class); mMain.invoke(null, new Object[]{new String[]{"aaa","bbb"}}); mMain.invoke(null,(Object)new String[]{"aaa","bbb"}); } 如果文章对您有帮助,欢迎点击下方按钮打赏作者 Comments No comments yet. To verify that you are human, please fill in "七"(required)
__label__pos
0.917867
string Image string question aburabe published Default 1 Like 3Answers 1 Comment question cchapel edited Default This question has an accepted answer. Accepted 0 Likes 1Answer 0 Comments question ThomasRushton answered Default 1 Like 1Answer 0 Comments question ThomasRushton answered Default 0 Likes 2Answers 0 Comments question anthony.green answered Default 0 Likes 1Answer 0 Comments question Kev Riley converted comment to answer Default This question has an accepted answer. Accepted 1 Like 2Answers 0 Comments question JohnM edited Default 0 Likes 0Answers 0 Comments question Vamshi09463 answered Default This question has an accepted answer. Accepted 2 Likes 5Answers 1 Comment question ThomasRushton edited Default 0 Likes 3Answers 0 Comments question SSGC commented Default 1 Like 2Answers 0 Comments question siugoalie78 commented Default This question has an accepted answer. Accepted 0 Likes 1Answer 1 Comment question David Wimbush answered Default 0 Likes 1Answer 0 Comments question BLT answered Default This question has an accepted answer. Accepted 0 Likes 2Answers 0 Comments question g_yerden commented Default This question has an accepted answer. Accepted 0 Likes 1Answer 3 Comments question GPO edited Default This question has an accepted answer. Accepted 1 Like 1Answer 5 Comments 50 Posts 47 Users 0 Followers Topic Experts There are currently no experts identified for this topic. Can you answer questions in this topic area? Community members who provide answers that are marked as correct earn reputation and may become recognized as topic experts.
__label__pos
0.886691
Percent increase from 309 to 913 Percent Increase This page will answer the question "What is the percent increase from 309 to 913?" and also show you how to calculate the percent increase from 309 to 913. Before we continue, note that "percent increase from 309 to 913" is the same as "the percentage increase from 309 to 913". Furthermore, we will refer to 309 as the initial value and 913 as the final value. So what exactly are we calculating? The initial value is 309, and then a percent is used to increase the initial value to the final value of 913. We want to calculate what that percent is! Here are step-by-step instructions showing you how to calculate the percent increase from 309 to 913. First, we calculate the amount of increase from 309 to 913 by subtracting the initial value from the final value, like this: 913 - 309 = 604 To calculate the percent of any number, you multiply the value (n) by the percent (p) and then divide the product by 100 to get the answer, like this: (n × p) / 100 = Answer In our case, we know that the initial value (n) is 309 and that the answer (amount of increase) is 604 to get the final value of 913. Therefore, we fill in what we know in the equation above to get the following equation: (309 × p) / 100 = 604 Next, we solve the equation above for percent (p) by first multiplying each side by 100 and then dividing both sides by 309 to get percent (p): (309 × p) / 100 = 604 ((309 × p) / 100) × 100 = 604 × 100 309p = 60400 309p / 309 = 60400 / 309 p = 195.46925566343 Percent Increase ≈ 195.4693 That's all there is to it! The percentage increase from 309 to 913 is 195.4693%. In other words, if you take 195.4693% of 309 and add it to 309, then the sum will be 913. The step-by-step instructions above were made so we could clearly explain exactly what a percent increase from 309 to 913 means. For future reference, you can use the following percent increase formula to calculate percent increases: ((f - n)/n) × 100 = p f = Final Value n = Initial Value p = Percent Increase Once again, here is the math and the answer to calculate the percent increase from 309 to 913 using the percent increase formula above: ((f - n)/n) × 100 = ((913 - 309)/309) × 100 = (604/309) × 100 = 1.9546925566343 × 100 ≈ 195.4693 Percent Increase Calculator Go here if you need to calculate another percent increase. Percent increase from 309 to 914 Here is the next Percent Increase Tutorial on our list that may be of interest. Copyright  |   Privacy Policy  |   Disclaimer  |   Contact
__label__pos
0.984982
MongoDB Datenmodellierung Wann Adobe Commerce Intelligence abruft MongoDB -Daten in ein relationales Modell übersetzt werden. Die schlechte Nachricht: Die meisten Datenmuster werfen zwar kein Problem auf, es gibt jedoch einige wenige, die von Commerce Intelligence, da die Übersetzung in ein relationales Modell erfolgt. Die gute Nachricht: All diese Muster lassen sich vermeiden. Subverschachtelte Arrays subnested Wenn Ihre Sammlung wie im folgenden Beispiel aussieht: Commerce Intelligence repliziert nur die Daten im Elemente-Array. Daten aus dem Array der Unterelemente werden nicht abgerufen. { _id: 0000000000000001 items: [ { _id: 0000000000000002 subItems: [ { _id: 0000000000000003 name: "Donut" description: "glazed" } ] } ] } Variablenobjektschlüssel varobjectkeys Sammlungen, die Objekte mit variablen Objektschlüsseln enthalten, werden nicht repliziert in Commerce Intelligence. Beispiel: { _id: 0000000000000001 friends: { 0000000000000002: "Jimmy", 0000000000000004: "Roger", 0000000000000005: "Susan" }, } Dies tritt normalerweise dann auf, wenn ein Objekt verwendet wird und ein Array angemessener wäre. Nacharbeiten Sie nun das obige Beispiel: { _id: 0000000000000001 friends: [ { friend_id: 0000000000000002, name: "Jimmy" }, { friend_id: 0000000000000004, name: "Roger" }, { friend_id: 0000000000000005, name: "Susan"} ] } recommendation-more-help e1f8a7e8-8cc7-4c99-9697-b1daa1d66dbc
__label__pos
0.938781
What is Docker and why is it vital?  2021-01-12 What is Docker and why is it vital? Docker definition Docker is a tool designed to create, deploy and run apps more easily by using containers. Those containers allow developers to pack an app with necessary parts such as libraries and dependence and send it under a package. Therefore, thanks to containers, apps will run on every other Linux machine regardless of any custom settings the machine might have different from the machine used to write and test the code. In another way, Docker is kind of similar to the virtual machine. The difference is that instead of creating an entire virtual operating system, Docker allows applications to use the same Linux kernel as the system they are running on and only requires the applications shipped with things not already run on the server. This helps increase the effectiveness and decrease the size of an app. And more important, Docker is open-source. This means everyone can contribute to Docker and extend it to meet their own requirement if they need unavailable additional functions   Who is Docker for? Docker is a tool designed to bring benefits for developers and system administrators, making it a part of DevOps tools. It means they can focus on the code without worrying about the system which in the end it will run. It also allows them to start by using one of the thousand designed programmings to run in the Docker package as a part of their apps. For those make operation, Docker brings efficiency and the ability to decline necessary systems due to lower price. Docker and security Docker ensures security for apps running on the shared environment, but those containers aren’t a substitute for implementing the appropriate security measures. Dan Walsh, a computer security leader best known for his work on SELinux, offers his views on the importance of ensuring Docker containers are secure. He also provides a detailed breakdown of the security features available in Docker and how they work. We “Hachinet Software” are Vietnamese IT outsourcing company based software service and talented provider with dynamic, energetic, dedicated and enthusiastic teams. We specialize in the followings: 1. Web application (.NET, JAVA, PHP, etc) 2. Framework (ASP, MVC, AngularJS, Angular6, Node JS, Vue JS) 3. Mobile application: IOS (Swift, Object C), Android (Kotlin, Android) 4. System applications (Cobol, ERP, etc), 5. New Technology (Blockchain, etc). If you are interested in our service or looking for an IT outsourcing partner in Vietnam, do not hesitate to contact us at [email protected]
__label__pos
0.951976
5th grade adding letters problem posted by . Not having any luck with getting help with this Homework question? Anyone good with these type of word problems? e g g + e g g --------- p a g e a=____ e=____ g=____ p=_1___ How we started: p = 1, and g is greater than 5. • 5th grade adding letters problem - you need to grunt through these. Respond to this Question First Name School Subject Your Answer Similar Questions 1. Adding letters instead of numbers Is anybody smarter than a fifth grader on this problem? 2. Linear equations I am not good at equations I need to complete this chart for this problem... y=-1/2x-4 X y (x,Y) __ -1 ____ -2 __ ____ __ -4 ____ 2 ___ ____ 8 ___ ____ I tried the first problem but I do not think it is correct.. y=-1/2x-4 x=-1 y=-1/2(-1)-4 … 3. Math - Magic Square??? My daughter is in 2nd grade and has a worksheet called Magic 26. It wants her to use the numbers 1 -12. Each row, column, and diagonal must equal 26. The four corners and four center numbers must equal 26 too. (example of puzzle below) … 4. Grammar Fill in the sentences with adverbs. Do not repeat. 1.______ my friend and I _____ went _____. 2.We had _______ wanted to go ____, so we went _____ ______ and ____. 3. When we arrived _____, it was ____ dark, so we ____ went _____. … 5. science Words to use: Accelerate - Sliding - Brake - Friction - Slipping - Inertia Velocity - Wheel - Strength - Static - Terminal velocity Questions 1. An object will ____? 6. math Write the first six terms of each of the sequences whose nth term is a)(-3)n { ____, ____, ____, ____ , ____, ____ } b)3 – 4n { ____, ____, ____, ____ , ____, ____ } 7. math 7. ____ is two thirds of ____ 8. ____ is three quarters of ____ 9. ____ is a quarter of ____ 10. ____ is a third of ____ 8. math 8. ____ is three quarters of ____ 9. ____ is a quarter of ____ 10. ____ is a third of ____ 9. Spanish 2 I am writing things that I do on the ship. Choose the correct verb (based on its meaning) to complete each sentence. A. Taco B. Nado C. Almuerzo D. Juego E. Escucho F. Miro. G. Abro H. Hablo I. Voy J. Como 1. ____ la musica. 2. ____ … 10. Finding The Missing Percent/Math 1) ____ of 50% = 45.5 2) ____ of 3 = 1.8 3) 56% of ____ = 3.92 4)69% of 30 = ____ 5)29% of ____ = 26.1 6) ____ of 20 = 6.2 7)82% of 10 = ____ More Similar Questions
__label__pos
0.769089
Probability involving n dice This means that after throwing the first six dice, the probability of a case that has at least one die that shows 3 is\begin{pmatrix}1/6 & 1/6 & 1/6 & 1/6\\ 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 1\\ \vdots & \vdots & 1 & 0.\end{pmatrixf • #1 67 4 I'm studying probability and am currently stuck on this question: Let's say we have n distinct dice, each of which is fair and 6-sided. If all of these dice are rolled, what is the probability that there is at least one pair that sums up to 7? I interpreted the above as being equivalent to the following: 1 - (Probability that there is no pair that sums up to 7) So if I were to consider just one pair of dice, then the probability that the pair adds up to 7 is 1/6, I think? So Pr(one pair doesn't add up to 7) = 5/6. But then I'm kind of stuck on how to proceed. Because there are lots of possible pairs amongst the n die, and some of these pairs overlap...for example, (die1, die2) is a pair, (die1, die3) is a pair, and so on. So I don't know how to account for these overlaps. I tried breaking down the problem into a number of cases where there is no way for any pair to add up to 7: (1) All of the dice show exactly one number. (2) All of the dice show exactly two numbers which do not add up to 7 -- e.g. All the dice show either 3 or 6. Or all the dice show 2 or 4. And so on... (3) All of the dice show exactly three numbers e.g. (1, 2, 4), no two of which can possibly add up to 7. (4) If all of the dice show 4 or more numbers, then there MUST exist a pair that adds up to 7, so I don't consider any of these cases. I suppose that I could add up the probabilities for all of the above cases, then I'd have the total probability that no two dice add up to 7? But then, how do I compute these? In fact, is there a better / easier approach than the one I have thought up? Thanks.   • #2 I might do it the way you suggest. I'd look at using induction on n.   • #3 Because there are lots of possible pairs amongst the n die, Do you know a function which can tell you exactly how many pairs there are?   • #4 I definitely agree that the way to begin this problem is to find the probability that no pairs sum to 7, which is of course 1 - the probability you are ultimately seeking. It might help organize the calculation if you specify what the sample space is: the set {1, 2, 3, 4, 5, 6} raised to the cartesian power of n, just a fancy way of saying the set of "all ordered n-tuples consisting of any of those 6 numbers". Then specify what the probability is for each point of this space, and what subset of the space you are finding the probability of. Finally, break that subset into disjoint pieces, each of which you can find the probability of. You have virtually done this already, but it doesn't hurt to conceive of what you have done systematically, since that helps to see how various pieces that you have broken your target event into are disjoint from each other. It will help if you figure out how many possible pairs of these numbers do not sum to 7. And then how many triples of these numbers involve no pair that sums to 7.   • #5 Let's do an example: (2) You know that two numbers are shown from the ##n## dice. We have the following choices, and all these choices have the same probabilities: a) 1 and 2 b) 2 and 3 c) 1 and 3 ... (How many choices do we have for two numbers to show?) Let's analyse case (a). The other cases are similar. So we know that all dice show either ##1## or ##2##. 
Since the dice are all distinct, this means there are ##2^n## ways of ##n## dice to show either ##1## or ##2##. However, ##2## of these ways are to show only ##1## and ##2##. So there are ##2^n - 2## ways to satisfy ##(a)##. Another way of doing this is by Markov chains. Do you know anything about this?   • #6 ... I tried breaking down the problem into a number of cases where there is no way for any pair to add up to 7: (1) All of the dice show exactly one number. (2) All of the dice show exactly two numbers which do not add up to 7 -- e.g. All the dice show either 3 or 6. Or all the dice show 2 or 4. And so on... (3) All of the dice show exactly three numbers e.g. (1, 2, 4), no two of which can possibly add up to 7. ... Let [itex]p_k(n)[/itex] be the probability that case k has occurred after n dice have been thrown. It is not hard to show that these probabilities satisfy a recurrence relation [tex]\begin{pmatrix}p_1(n+1) \\ p_2(n+1) \\ p_3(n+1) \end{pmatrix} = \begin{pmatrix} \frac{1}{6} & 0 & 0 \\ \frac{2}{3} & \frac{1}{3} & 0 \\ 0 & \frac{1}{3} & \frac{1}{2} \end{pmatrix} \begin{pmatrix}p_1(n) \\ p_2(n) \\ p_3(n) \end{pmatrix}[/tex] with [itex]p_1(1)=1, p_2(1)=0=p_3(1)[/itex]. Since the matrix is lower triangular and the eigenvalues (along the diagonal) are distinct, it's not overly difficult to find the matrix powers to solve the recurrence relation. After a bit of algebra we find that the probability of obtaining at least one pair adding to 7 after n throws of the dice is [tex]1-(p_1(n)+p_2(n)+p_3(n)) = 1-\frac{4}{2^{n-1}}+\frac{4}{3^{n-1}}-\frac{1}{6^{n-1}}[/tex]   Suggested for: Probability involving n dice Replies 6 Views 800 Replies 41 Views 3K Replies 3 Views 950 Replies 4 Views 837 Replies 42 Views 3K Back Top
__label__pos
0.998255
Factors of 1083 So you need to find the factors of 1083 do you? In this quick guide we'll describe what the factors of 1083 are, how you find them and list out the factor pairs of 1083 for you to prove the calculation works. Let's dive in! Want to quickly learn or show students how to find the factors of 1083? Play this very quick and fun video now! Factors of 1083 Definition When we talk about the factors of 1083, what we really mean is all of the positive and negative integers (whole numbers) that can be evenly divided into 1083. If you were to take 1083 and divide it by one of its factors, the answer would be another factor of 1083. Let's look at how to find all of the factors of 1083 and list them out. How to Find the Factors of 1083 We just said that a factor is a number that can be divided equally into 1083. So the way you find and list all of the factors of 1083 is to go through every number up to and including 1083 and check which numbers result in an even quotient (which means no decimal place). Doing this by hand for large numbers can be time consuming, but it's relatively easy for a computer program to do it. Our calculator has worked this out for you. Here are all of the factors of 1083: • 1083 ÷ 1 = 1083 • 1083 ÷ 3 = 361 • 1083 ÷ 19 = 57 • 1083 ÷ 57 = 19 • 1083 ÷ 361 = 3 • 1083 ÷ 1083 = 1 All of these factors can be used to divide 1083 by and get a whole number. The full list of positive factors for 1083 are: 1, 3, 19, 57, 361, and 1083 Negative Factors of 1083 Technically, in math you can also have negative factors of 1083. If you are looking to calculate the factors of a number for homework or a test, most often the teacher or exam will be looking for specifically positive numbers. However, we can just flip the positive numbers into negatives and those negative numbers would also be factors of 1083: -1, -3, -19, -57, -361, and -1083 How Many Factors of 1083 Are There? As we can see from the calculations above there are a total of 6 positive factors for 1083 and 6 negative factors for 1083 for a total of 12 factors for the number 1083. There are 6 positive factors of 1083 and 6 negative factors of 1083. Wht are there negative numbers that can be a factor of 1083? Factor Pairs of 1083 A factor pair is a combination of two factors which can be multiplied together to equal 1083. For 1083, all of the possible factor pairs are listed below: • 1 x 1083 = 1083 • 3 x 361 = 1083 • 19 x 57 = 1083 We have also written a guide that goes into a little more detail about the factor pairs for 1083 in case you are interested! Just like before, we can also list out all of the negative factor pairs for 1083: • -1 x -1083 = 1083 • -3 x -361 = 1083 • -19 x -57 = 1083 Notice in the negative factor pairs that because we are multiplying a minus with a minus, the result is a positive number. So there you have it. A complete guide to the factors of 1083. You should now have the knowledge and skills to go out and calculate your own factors and factor pairs for any number you like. Feel free to try the calculator below to check another number or, if you're feeling fancy, grab a pencil and paper and try and do it by hand. Just make sure to pick small numbers! Cite, Link, or Reference This Page If you found this content useful in your research, please do us a great favor and use the tool below to make sure you properly reference us wherever you use it. We really appreciate your support! • "Factors of 1083". VisualFractions.com. Accessed on October 1, 2023. 
http://visualfractions.com/calculator/factors/factors-of-1083/. • "Factors of 1083". VisualFractions.com, http://visualfractions.com/calculator/factors/factors-of-1083/. Accessed 1 October, 2023. • Factors of 1083. VisualFractions.com. Retrieved from http://visualfractions.com/calculator/factors/factors-of-1083/. Factors Calculator Want to find the factor for another number? Enter your number below and click calculate. Find Factors Next Factor Calculation Factors of 1084
__label__pos
0.579026
Search Images Maps Play YouTube News Gmail Drive More » Sign in Screen reader users: click this link for accessible mode. Accessible mode has the same essential features but works better with your reader. Patents 1. Advanced Patent Search Publication numberUS3227865 A Publication typeGrant Publication dateJan 4, 1966 Filing dateJun 29, 1962 Priority dateJun 29, 1962 Publication numberUS 3227865 A, US 3227865A, US-A-3227865, US3227865 A, US3227865A InventorsHoernes Gerhard E Original AssigneeIbm Export CitationBiBTeX, EndNote, RefMan External Links: USPTO, USPTO Assignment, Espacenet Residue checking system US 3227865 A Images(3) Previous page Next page Description  (OCR text may contain errors) Jan. 4, 1966 G. E. HOERNES 3,227,865 RESIDUE CHECKING SYSTEM Filed June 29, 1962 Sheets-Sheet 1 1 I13 I I17 0b Bib MAIN UNIT 19 Rb 'AR T T /25 29 T T 27 Ar 28 n Rr RESIDUE CHECKING UNIT c ERROR 21 ,26 0 57 4? M SUBTRACTOR I Z No Br E ERROR 2 INVENTOR Y XGY GERHARD E HOERNES o 1 2 B ATTORNEY United States Patent 3,227,865 RESIDUE CHECKING SYSTEM Gerhard E. Hoernes, Poughkeepsie, N.Y., assignor to International Business Machines Corporation, New York, N.Y., a corporation of New York Filed June 29, 1962, Ser. No. 206,423 6 Claims. (Cl. 235-153) This invention relates to electronic residue code systems; more particularly, the invention relates to an electronic residue code checking system that is especially suited for checking the arithmetic division operation of a digital computer. It is generally known that one of the preferred ways of error checking the operations of a computer involves the familiar duplication technique; that is, for every operation that is performed by the computer, the identical operation is repeated and the two results generated are compared for identity. Identity between the two independently generated results gives assurance that the operation has been performed correctly. It is also well known that the main disadvantages of the duplication checking technique are that either twice the equipment, or twice the amount of time, is generally required to generate the two results for comparison. In order to minimize these disadvantages, the prior art has developed a technique which involves the use of the residue, or modulo, code. It is noted that the residue code, also known as the modulo code, is formed from a natural number (the term natural number is used to denote an original number in some number system) by defining the residue of any integer to be the least positive integral remainder after the given integer has been divided by another number, known .as the base. Thus, the numher 7 is defined to be congruent to lMOD3. (Divide 7 by 3 and the remainder is 1.) The number 1 is referred to as the residue. Although there are some restrictions, the choice of a base is arbitrary within limits; thus, the number 7 can be said to be congruent to, alternatively, 1MOD3, 2MOD5, 0MOD7 etc. In the examples just given, the number 7 is representable by the residues 1, 2 or 0. Having once decided upon one fixed base, all natural numbers have their congruent residues. Arithmetic operations within the residue codeassum ing, for simplicity, one fixed base-are also defined and bear a one-to-one relationship to the corresponding arithmetic operation within the natural number domain. 
That is, residue representations of natural numbers can be added and, when added according to a truth table, will present a residue result which is identical to the residue of the corresponding result of the addition process in the natural number domain. Similarly, the subtraction process and the multiplication process are also defined within the residue domain by their respective truth tables and, again, the residue result of these processes will have a one-to-one correspondence with the residue of the result of corresponding operation in the natural number domain. An example will now be given which illustrates the above-mentioned properties of the residue code. Assume that it is desired to add the number 25 to the number 26. It is known that the result, or sum, will be 51. The residue of the number 25, to the base 3, is 1 that is, 25 divided by 3 will leave a remainder of 1. The residue of the number 26, to the base 3, is 2; that is, 26 divided by 3 will leave a remainder of 2. Therefore, the residue of the number 25 is a 1 and the residue of the number 26 is a 2. If these two residues are added to each other according to the rules of addition in the residue code, the result Will be a residue of 0. That is, residue 1 added to a residue 2, gives a residue 0. Comparing the residue of the addition process in the residue code with the residue of the result in the natural number system, it .is noted that the two residues are identical; namely, the sum of 51 has a residue of zero. That is, the residue developed from a residue calculation is identical to the residue of the result generated in the natural number domain. Similar examples can also be developed for the residue processes of subtraction and multiplication. A more detailed description of the abovementioned properties of the residue code may be had in an article by Harvey L. Garner, entitled The Residue Number System in the =IRE Transactions on Electronic Computers, June 1959, page 140. Thus, for the immediate discussion the residue code can be characterized in that it represents an index or abstract of the magnitude of a natural number. The one-to-one relationship of operations within the residue code to the related operations in the natural number domain has considerably simplified the duplicate residue circuitry necessary for a check on the operations of a digital computer. However, because of the peculiar properties of the residue code with respect to the division process, the residue code checking systems of the prior art have had to deviate from a true duplication check in the case of supervising the division operation of the computer. That is, division is defined (i.e., it will yield a meaningful result) in the residue code only when the corresponding division with natural numbers yields an integral result. It has heretofore not been possible to perform a duplicate check on the division process where the division process has yielded a remainder because the prior art duplicate residue circuitry has been unable to repeat the identical operation, namely, division. Accordingly, it is one principal object of this invention to provide a new and improved residue system for producing a residue equivalent of the quotient generated by a computer. A further object of this invention is to provide a new and improved residue checking system for supervising the division operation of a computer more nearly in accordance with the duplication technique of error checking. 
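The bookkeeping just described (for example, 25 + 26 = 51 checked through base-3 residues) can be sketched in a few lines of code. The sketch below is purely illustrative, is not part of the patent, and every name in it is invented for the example.

// Illustrative sketch: duplicating an addition in the residue (base 3) domain.
#include <cassert>

int residue(int n) { return n % 3; }                    // least positive remainder, base 3
int residue_add(int x, int y) { return (x + y) % 3; }   // residue addition, base 3

int main()
{
    int a = 25, b = 26;
    int sum = a + b;                                    // result of the "main unit": 51
    int check = residue_add(residue(a), residue(b));    // independent result from residues only: 1 plus 2 gives 0
    assert(residue(sum) == check);                      // residue of 51 is also 0, so the addition checks out
}

The same pattern extends to subtraction and multiplication, each with its own residue truth table.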
Previous residue checking systems have approached the supervision of a division operation of a computer limited by the common knowledge that the division process within the residue code is undefined when the corresponding division operation in the natural number domain does not yield an integral quotient. Accordingly, the prior art systems have obviated the need for performing a duplicate division operation by noting that, whenever a division process yields a remainder, rearrangement of the familiar division algorithm equates the product of the quotient and the divisor, on the one hand, and the difierence between the dividend and the remainder, on the other hand. Thus, the prior art residue checking systems have synthesized a check by utilizing a residue subtraction operation to form the residue difference between the dividend and the remainder, on the one hand, and the residue product of the residue of the quotient and the divisor, on the other hand, and have compared them for identity. In this manner, identity between the two residue quantities compared gives a reasonable assurance that the division operation of the computer was performed correctly. It is significant to note that neither of the generated residue quantities, between which a comparison is effected in the prior art, bears any relationship to the final result generated by the computer. That is, the check in the prior art has been an internal one between two fictitious quantities. More particularly, the product of the quotient and the divisor (as generated by the residue checking unit) bears no relationship to the quotient generated by the computer. This fact has several important consequences, among which are the following: (a) Since the residue checking system has not generated a quantity that has a direct relationship to the quantity generated by the computer, it is not possible to utilize the quantity generated by the checking system for purposes of transmitting such a quantity, along with the quantity generated by the computer, as a check for errors in transmission. (b) In the event an error is indicated in the prior art systems, the operation of the computer is usually halted because, as the residue generated quantity has no relationship to the computer generated quantity, it is not possible to transmit the residue generated quantity along with the erroneous computer generated quantity and postpone a check to a later time. Accordingly, it is a further principal object of this invention to provide a division residue checking system which generates an independent quantity that bears a direct relationship to the computer unit generated quantity and which is, therefore, suitable for checking for errors in transmission of the computer generated quantity. A further principal object of this invention is to provide a division residue checking system which allows the computer to continue to operate and postpone a check in case a discrepancy is noted between the computer generated result and the residue unit generated result. 
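The prior-art style of check described above (carrying the identity quotient times divisor equals dividend minus remainder into the residue domain) can likewise be sketched briefly; again this is an illustration only, with invented names, and not the patent's circuitry.

// Illustrative sketch of the prior-art check: residue(A - R) compared with residue(Q) times residue(B), base 3.
#include <cassert>

int residue(int n) { return n % 3; }
int residue_sub(int x, int y) { return ((x - y) % 3 + 3) % 3; } // residue subtraction, base 3
int residue_mul(int x, int y) { return (x * y) % 3; }           // residue multiplication, base 3

int main()
{
    int A = 7, B = 3;          // dividend and divisor
    int Q = A / B, R = A % B;  // computer-generated quotient (2) and remainder (1)

    int lhs = residue_sub(residue(A), residue(R)); // residue of (A - R): 1 minus 1 gives 0
    int rhs = residue_mul(residue(Q), residue(B)); // residue of (Q * B): 2 times 0 gives 0
    assert(lhs == rhs);        // equality: the division is presumed correct

    // Note that neither compared quantity equals residue(Q), which is 2 here; that is
    // the shortcoming the patent goes on to address.
}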
In accordance with the invention, there is provided a tandem residue unit which i adapted to cooperate with the main processing unit of a digital computer and which comprises means for generating a residue difference between the residue of the dividend and the residue of the remainder, and residue divider means for dividing that residue difference by the residue of the divisor, whereby the divider means generates a residue quotient which is available for comparison purposes with the computer generated quotient. With reference to the above-stated limitation of the residue code, namely that the process of a residue division is defined only when the resulting quotient is an integral, it is noted that the residue division means of the instant invention will perform this process and produce a valid result because the residue difference between the dividend and the remainder will always be integrally divisible by the residue of the divisor. This fact can be demonstrated, for example, by noting that, while 7 divided by 3 will not yield an integral quotient (i.e., the remainder is 1), 7 minus 1 (i.e. 6) divided by 3 will give an integral quotient, namely, 2. While the above example has been illustrated in the natural number domain, it holds equally as well within the residue code. Therefore, the residue division means of the instant invention yield a valid residue quotient, which quotient, if no error has resulted, will be identical to the residue of the computer generated quotient. Comparison means are included in the invention to effect a comparison between said two quotients and, if they are alike, will generate a no error signal, thereby indicating that the operation of the processing unit has been carried out without error. Assuming that, for some reason, there is a discrepancy between the two generated quotients and that the comparison unit, therefore, yields an error signal, various possible defects can be stipulated. It should be noted at this point that, in the ensuing description, only the occurrence of a single error is considered. Thus, for example, the error may lie in the incorrect generation of a quotient by the main processing unit. Taking note of the fact that only single errors are stipulated, in such an event, the residue generated quotient is obviously the correct quotient. Further, taking into consideration the relative complexity of the main processing unit (as opposed to the tandem residue checking unit), it is more likely that the error will occur in the main processing unit than in the residue checking unit. Upon receipt of an error indication, it may be, as it heretofore has been, necessary to halt the operation of the computer to locate the source of the error and correct it. This course of action is open to a number of objections, the chief of which is that it ties up the computer equipment while a particular unit is being repaired. The fact that the tandem residue unit of this invention has generated a correct residue of what the quotient should have been does not necessitate that the computer operation be interrupted. Thus, it may in some instances be more desirable to let the computer continue to operate with the erroneous quotient, provided that an indication of such a fact is made. For this purpose, it is quite advantageous to utilize the residue quotient furnished by the tandem checking unit as flag signal that will be associated with the erroneous quotient. Such a flag signal, which may comprise several bits, may serve a number of important purposes. 
For example, flag bits may be transmitted along with the erroneous quotient into memory storage, thereby providing valuable clues to a technician who may be attempting to determine the location of the error by noting which quantity is erroneous. The availability of a correct signal representing what the quotient should have been improves what is known as the limp-along feature of a computer. In effect, it allows postponement of error detection and error correction operations because the erroneous quantities are permanently tagged with their flag bits. It is obvious that, when a division operation of a computer is to be monitored by residue checking means in accordance with the above invention, it may sometimes result that the residue of the divisor is 0. For example, the residue representation to the base 3 of the number 6, a possible divisor in a given division process, would be a 0. Since residue division by 0 is not defined (akin to the difficulties in the natural number domain), there are provided, according to this invention, auxiliary means for treating such a possibility. In the event that the residue of a divisor is 0, additional circuit means are responsive to such an indication and these additional circuit means will effect a comparison between the residue of the divisor, on the one hand, and the residue difference between the residue of the dividend and the residue of the remainder, on the other hand. A correct division will, in this case, be indicated by equality of the two quantities compared. That is, if the comparison means indicate that, when the residue of the divisor is 0, the residue difference between the dividend and the remainder is also 0, the correctness of the operation performed by the main processing unit has been established. The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of a preferred embodiment of the invention, as illustrated in the accompanying drawings.
FIG. 1 is a schematic diagram of an illustrative example of a residue error checking system according to the invention cooperating with a main processing unit of a digital computer.
FIG. 2 is a truth table which establishes the corresponding output conditions for respective input conditions of a residue subtractor to the base 3 according to the invention.
FIG. 3 is a truth table which establishes the corresponding output conditions for respective input conditions of a residue divider to the base 3 according to the invention.
FIG. 4 is an illustrative embodiment of a device for performing residue subtraction to the base 3 according to the invention.
FIG. 5 is an illustrative embodiment of a device for performing residue division to the base 3 according to the invention.
FIG. 6 is an illustrative embodiment of a comparator according to the invention.
FIG. 7 is an illustrative embodiment of means to indicate a comparison when the residue of the divisor is zero.
FIG. 8 is an illustrative embodiment of a translator for translating from a binary code to a residue code to the base 3.
FIG. 8a is an illustrative embodiment of a residue generator for use in a residue translator.
Reference may now be had to FIG. 1 which discloses a main unit 10 of a digital computer that is adapted to perform all types of arithmetic operations, including, for example, division.
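Before the detailed circuit description, the overall check just summarized, including the zero-divisor-residue fallback, may be restated as a brief numerical sketch to the base 3. This is an illustration only, with invented names, and not the patent's circuitry.

// Illustrative sketch (base 3) of the invention's check; not the patent's hardware.
#include <cassert>

int residue(int n) { return n % 3; }                             // congruent residue, base 3
int residue_sub(int x, int y) { return ((x - y) % 3 + 3) % 3; }  // residue subtraction
int residue_div(int x, int y)                                    // residue division, defined only for y != 0
{
    for (int z = 0; z < 3; ++z)
        if ((z * y) % 3 == x)
            return z;
    return -1;                                                   // not reached when y is 1 or 2
}

int main()
{
    // Ordinary case (the 55 / 5 example used later in the text): divisor residue is non-zero.
    int A = 55, B = 5, Q = A / B, R = A % B;                     // Q = 11, R = 0
    int diff = residue_sub(residue(A), residue(R));              // Ar minus Rr = 1, always divisible by Br
    assert(residue_div(diff, residue(B)) == residue(Q));         // 1 divided by 2 gives 2 in residue code; residue(11) = 2

    // Zero-divisor-residue case (for example B = 6): fall back to checking that Ar minus Rr is also 0.
    A = 20; B = 6; Q = A / B; R = A % B;                         // Q = 3, R = 2
    diff = residue_sub(residue(A), residue(R));                  // 2 minus 2 gives 0
    assert(residue(B) == 0 && diff == 0);
}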
To illustrate the operation of the invention with respect to checking the division process as performed by the main processing unit 10, there are shown two input lines 13 and 15, each line respectively transmitting digital signals representing the divisor B and the dividend A Although lines 13, 15 are shown as single lines for purposes of illustration, they may comprise a plurality of lines to provide a path for the signals, as for example when the main processing unit 10 operates in a parallel mode. It is immaterial for the present whether the main processing unit 10 operates in a serial, parallel, or mixed serial-parallel, mode. Similarly, the coding for the signals on lines 13, 15 may be any suitable digital code, for example the well-known binary code. Main processing unit 10 generates a result in response to the division operation which will be represented on lines 17, 19; namely the generated binary quotient Q and the generated binary remainder, R will appear on lines 17, 19 respectively. For the purposes of the present invention it is immaterial in what particular manner main processing unit 10 generates the results Q, and R in response to the entering signals A B Translators 23, 25 translate the binary signals A and 13;, into their congruent residues, while translators 27, 29 translate the binary signals Q R into their congruent residues. A translator of the type used will be described more fully hereinafter. For the present purpose it is suflicient to note that these translators form a congruent residue, or abstract, of the signals appearing on their respective input lines. Signal lines 33, 35 transmit the residue representations B and A of the respective binary divisor B and the binary dividend A Lines 33 and 35 each provide an input to the residue checking unit which comprises a residue subtractor 24 cooperating with a residue divider 26. Translators 27, 29 also transmit on their respective output lines 37, 39, the residue representations of the quotient Q, and the remainder R as generated by the main processing unit 10. The residue representation of the remainder, R,, is transmitted on line 39 and provides a second input to residue subtractor 24. The exact details of residue subtractor 24 will be described more fully below, but at the present time it is sufficient to note that residue subtractor 24 will provide an output signal on line 21 which represents the residue difference between the quantities A, and R,. Line 21 transmits the residue difierence, A,-R,, to residue divider 26, which also accepts the residue divisor B, from line 33. The exact nature of residue divider 26 will be described more fully below. For the present purposes it is sufiicient to note that residue divider 26 performs a residue division and thereby produces, on line 47 a signal representing a residue generated quotient Q'r- Comparison unit 28 compares the signal Q with the signal Q appearing on line 37. Line 37 carries a signal which represents the output of translator 27 which translates the binary signal O to its congruent residue Q,. The comparison unit 28 will be more fully described below but for the present purposes it is sufiicient to note that it will compare Q and Q for identity. It the two quantities compared are identical, a NO ERROR signal appears on line 67, thereby indicating that the division process performed by the main processing unit 10 is correct. 
In other words, identity between two substantially independently generated results gives a good assurance that the division operation has been without error. In such a case, the residue generated quotient, Q,, is available as a group of checking bits to detect for errors in the transmission of the computer generated quotient Q If the comparison unit 28 indicates that the quantities Q and Q are unlike, it is likely that an error has occurred, and therefore an error signal is provided on line 57. As previously noted, it is quite likely that the error has occurred in the main processing unit 10. This means that a faulty Q, has been generated. Thus, as between the two quantities Q and Q which are compared, O is incorrect and Q (on the assumption that only single errors have occurred) is correct. As previously noted, the correct residue quotient, Q may be valuable as a group of flag bits to indicate that the computer generated quotient Q, is in error. For example, the residue quotient may be transmitted along with the computer generated quotient into memory storage Where it can later be checked for identity to diagnose the error in the machine. The indication of which quantity stored in memory is erroneous, gives valuable clues in this regard. It should be noted that the principle of this invention is applicable to any base, or modulus. As a practical matter, however, a judicious choice must be made in any given case by suitably balancing the considerations of increasing complexity of equipment as the base increases with the simplicity, but corresponding lesser information carrying capability, that goes with a smaller base. That is, it may be desirable to work with a base of 7, or 5, or 3. For purposes of clarity and ease of understanding this invention, the subsequent discussion will be confined to an illustrative embodiment employing the base 3. However, this is to be considered in no way as limiting the invention to this base. Referring now more particularly to FIG. 2, there is shown a truth table for residue subtraction, according to the base 3. A circle drawn around the conventional minus sign indicates a residue operation. When working with a base of 3, the possible residues, by definition, are 0, 1, and 2. Thus, two quantities, X and Y, between which it is desired to effect a residue subtraction, can each be represented by any one of the three previously mentioned residues. The correspondence of the results, as indicated by the truth table, with the similar operation in the natural number domain, will now be indicated. As a representative example, assume that it is desired to subtract 14 from 18. The residue of 18 is, according to the rules previously mentioned, 0. The residue of the number 14 is 2. The residue of the result, namely 4, is 1. This is precisely what is indicated by the truth table in FIG. 2, wherein if the minuend X is 0, and the subtrahend Y is 2, the result will be a residue of 1. Other examples can be used to show the validity of the truth table in FIG. 2. Referring now to FIG. 4, there is disclosed a residue subtractor 24 operating in accordance with the truth table established in FIG. 2. Signal lines 71, 73 represent the residue of the dividend, A Since the residue of the dividend can be only one of three possibilities, i.e. 1, 2, or 0, two signal lines are suflicient to represent these three possibilities. Thus, A would be represented on line 71, 73 at any given instant, by the combination of input signals appearing thereon. 
For example, the residue of A would be a 0 when line 71 is UP, and line 73 is also UP. For purposes of definition, a line is said to be UP when the signal thereon resides at one determined voltage, or current, level. A line is said to be DOWN when the signal impressed thereon resides at another determined voltage, or current, level. According to these definitions, the signal A appearing on lines 71, 73 can be defined according to the following illustrative coding scheme:

                    Line 73
                    UP        DOWN
   Line 71   UP      0          2
             DOWN    1      (not used)

The residue of the remainder R, is coded in the same way on lines 75, 77:

                    Line 77
                    UP        DOWN
   Line 75   UP      0          2
             DOWN    1      (not used)

A description will now illustrate the operation of residue subtractor 24. Assume that, in accordance with the previous example, the residue of the dividend, A,, is 0, and that the residue of the remainder, R,, is 2. This would be represented, in accordance with the coding scheme established above, by lines 71, 73 being UP, and lines 75, 77 being UP and DOWN, respectively. The UP signal on line 71, which by-passes inverter 72, will provide an UP signal to AND gates 81, 82 and 86. The UP signal on line 71 which is inverted by inverter 72 will not serve to condition any AND gates. The UP signal on line 73, which by-passes inverter 74, will serve to provide an UP signal to AND gates 80 and 85. The UP signal on line 73 which is inverted by inverter 74 will not serve to condition any AND gates. The UP signal on line 75, which by-passes inverter 76, will provide an UP signal to AND gates 81, 82, 83 and 86. The UP signal on line 75, which is inverted by inverter 76, will not serve to condition any AND gates. The DOWN signal on line 77, which by-passes inverter 78, will not serve to condition any AND gates. The DOWN signal on line 77, which is inverted by inverter 78, will provide an UP signal to AND gates 83 and 86. A review of which AND gates 80-87 have been activated in response to the signals on lines 71, 73, 75 and 77 will show that, of all of the AND gates 80-87, only AND gate 86 has received all the necessary UP signals to be activated. That is, none of the other AND gates will be activated; only AND gate 86 will be fired. The resultant output of residue subtractor 24 will, therefore, be indicated by the state of activation of OR circuits 88, 89. As previously noted, none of the AND gates 81-83 were activated by the input signals to residue subtractor 24. Therefore, OR gate 88 is not provided with a signal, and, therefore, its output on line 90 will be DOWN. On the other hand, AND gate 86, as previously noted, has been activated by the input signals to residue subtractor 24, and the output from AND gate 86 activates OR gate 89, which results in an UP signal on line 91. Therefore, the output of residue subtractor 24 is represented by a DOWN signal on line 90, and an UP signal on line 91. The coding scheme for the output lines 90, 91 of residue subtractor 24 is given below:

                    Line 91
                    UP        DOWN
   Line 90   UP      0          2
             DOWN    1      (not used)

Reference to this coding scheme and to the example just discussed will show that residue subtractor 24 has indicated a residue of 1, when the residue of the minuend was a 0, and the residue of the subtrahend was a 2. This is in conformance with the truth table established for such a residue subtractor in FIG. 2. Reference may now be had to FIG. 3, which discloses a truth table for residue division to the base 3. Since the residue dividend X is representable by any one of the residues 1, 2, 0, but the residue divisor is only meaningful when it is either 1 or 2, FIG. 3 is actually a 2 by 3 truth table, whereas FIG.
2, which is the truth table for residue subtraction, is a 3 by 3 truth table. This distinction between FIGS. 2 and 3 illustrates the well known difiiculties of residue division, when the residue of the divisor is 0. That is, since residue division when the residue of the divisor is 0, is not defined, no attempt is made to include it in a truth table, such as shown in FIG. 3. A representative example will now be discussed which shows the conformance of the truth table in FIG. 3 to an actual division process performed in the natural number domain. Suppose that it is desired to divide 55 by 5. The quotient, namely 11, is an integral one, and, therefore, the corresponding residue process will bear a valid oneto-one correspondence with the division process in the natural number domain. The residue of the dividend 55, to the base 3, is l. The residue of the divisor 5, to the base 3, is 2. According to the truth table shown in FIG. 3, when the residue of the dividend X is 1, and the residue of the divisor Y, is 2, the result predicted by the truth table in FIG. 3, will be a residue of 2. Relating this to our example in the natural number domain, it can be seen that the quotient, namely 11, has a residue of 2, to the base 3. Thus, the residue of the result, namely 2, is identical to the resultant residue, namely 2, as predicted by the truth table in FIG. 3. Other examples can show the validity of the truth table in FIG. 3. Reference may now be had to FIG. 5, which discloses residue circuitry operating in accordance with the truth table shown in FIG. 3. FIG. 5 broadly comprises a residue divider unit 26 and a gating unit 22. Residue divider 26 accepts inputs on lines 90, 91 which transmit the residue difference, A R,. Residue divider 26 further accepts inputs on lines 93, 95 which transmit the residue of the divisor B Assume that, in accordance with the examples previously discussed, the residue difference, A,- R,, is 1; this would be represented by line having a DOWN signal impressed thereon, and line 91 having an UP signal impressed thereon. (Lines 90, 91 transmit the output of residue subtractor 24, as shown in FIG. 4). Similarly, if we assume the residue of the divisor, B,, to be 2, a coding scheme can be established for lines 93, which transmit the residue representation of the divisor, B,.. Such a coding scheme is given below: DOWN 1 In accordance with coding scheme for lines 93, 95, when the divisor B is 2, this would be represented by line 93 being UP, and line 95 being DOWN. The DOWN signal on line 90 is inverted by inverter 101 to provide an UP signal to AND gates 105, 111. The UP signal on line 91 is inverted by inverter 103, which provides a DOWN signal to AND gates 107, 109. The UP signal on line 93 is transmitted by line 93 to AND gates 105, 109. The DOWN signal on line 95 is transmitted by line 95 to AND gates 107, 111. A review of which of AND gates 105, 107, 109, 111 have received all UP signals on their inputs, shows that of all the AND gates mentioned, only AND gate has received all of its necessary UP signals. This means that only AND gate 105 will provide an UP signal to OR gate 113. OR gate 115 will not be fired and therefore will remain inactive, or DOWN. The output of residue divider unit 26 will therefore be an UP signal on line 117, and a DOWN signal on line 119. 
For con- 9 venience, the output conditions on line 117, 119 can be classified in accordance with a coding scheme as used previously, to designate the residue equivalent generated by residue divider unit 26 in response to the division upon the quantity A,R, by B,. Such a coding scheme is given below. Line 119 UP DOWN 117\ DOWN 1 The output of residue divider unit 26 is provided to a gating network 22, which comprises EXCLUSIVE OR circuit 120 and AND gates 122, 124. An EXCLUSIVE OR circuit is defined in the prior art as a circuit which provides an output, only when its inputs are not alike. This means, in the context of this invention, that EXCLUSIVE OR circuit 120 will provide an UP signal on line 121 at all times, except when lines 93, 95 are simultaneously UP. (The condition of lines 93, 95 being simultaneously DOWN is not a case defined, in the context of this invention.) The significance of the condition of lines 93, 95 being simultaneously UP will be described below. For the present purposes, it is sufiicient to note that, as long as lines 93, 95 are UP, and DOWN, respectively, or viceversa, line 121 will provide an UP signal to AND gates 122 and 124, thereby activating AND gates 122 and 124 to be fired by signals on lines 117, and 119, which represent the output of residue divider unit 26. That is, if AND gates 122 and 124 receive an UP signal on line 121, an UP signal on line 117 will result in AND gate 122 being activated to provide an UP signal on its output line 125; similarly, a DOWN signal on line 119 will not activate AND gate 124, which results in a DOWN signal on line 127. To summarize, therefore, AND gates 122, 124 function to gate the outputs of OR gates 113, 115 in response to the control signal generated by EXCLUSIVE OR circuit 120 on line 121. That is, an UP signal on line 121 assures that the outputs on lines 125, 127 will be identical to the outputs of residue divider unit 26, on lines 117, 119. Reference may now be had to FIG. 6 which shows a comparator 28 which functions to compare the gated output of residue divider 26, as it appears on lines 125, 127, with the residue of the computer generated quotient Q,. The input to the comparator 28, on lines 125, 127 represents the residue generated quotient, Q',. In line with the previous examples, the output of residue divider unit 26 is represented by an UP signal on line 125, and a DOWN signal on line 127. To illustrate the operation of comparator 28, let it be assumed that the residue of the computer generated quotient Q is represented by an UP signal on line 131 and DOWN signal on line 133. The residues of Q, are, by definition, 0, 1, or 2. This can be represented on lines 131, 133 in accordance with the coding scheme given below: Line \133 UP DOWN 131\ DOWN 1 As previously described, an EXCLUSIVE OR circuit will have a signal on its output only when the input signals are unlike. Conversely, identity of input signals to an EXCLUSIVE OR circuit will not produce an output, i.e. will produce a DOWN output. In the context of this invention, this means that EXCLUSIVE OR circuits 135, 137 will not produce any output signals when their input signals are alike. That is, if the signals on line 125, and line 131, are simultaneously UP (or DOWN), EXCLU- SIVE OR circuit 135 will produce a DOWN signal on its output. Similarly, when lines 127 and 133 are simultaneously UP (or DOWN), EXCLUSIVE OR circuit 137 will also produce a DOWN signal on its output. 
If neither of EXCLUSIVE OR circuits 135, 137 have produced an UP signal, OR circuit 139 produces a DOWN signal which is inverted by inverter 141 to provide an UP signal to OR gate 143. OR gate 143 in turn, produces an UP signal on its output 67, which is a CONTINUE signal, thereby indicating that the comparison between Q',, as represented on lines 125, 127, and Q,, as represented on lines 131, 133 has proven successful, i.e. they are alike. Assume that, for some reason, the signals on lines 125, 127 do not correspond with the signals on line 131, 133. This is tantamount to saying that Q,, as generated by the residue divider unit 26, and Q,, do not agree and that, therefore, an error has occurred. Nonidentity of input signals to either one of EXCLUSIVE OR circuits 135, 137 will produce from one of them an UP signal which is transmitted by OR circuit 139 to inverter 141. Inverter 141 inverts the UP signals thus produced, to a DOWN signal, which will not activate OR circuit 143. Therefore, the output of OR circuit 143 will be a DOWN signal on line 67. This means that an error has occurred, because the required CONTINUE signal is not provided on line 67. In other words, an error is indicated by the absence of a CONTINUE signal. If desirable, it is possible to feed line 67 into an inverter the output of which will indicate an error. In other words, the ERROR signal can be derived from the CONTINUE signal by merely passing it through an inverter. As previously noted, residue division is not defined (i.e. it is not meaningful) when the residue of the divisor is 0. In such a case, i.e. when the residue of the divisor B, is 0, the output of residue divider 26 is meaningless. It therefore doesnt make much sense to compare the output of residue divider 26 with the residue of the quotient generated by the computer, namely Q,. It is for this reason that the gating network 22 functions to gate the output of residue divider 26 into comparator 28, only when the residue of the divisor B, is not 0. If the residue of the divisor, namely B,, is 0, this would be represented, in accordance with the coding scheme previously outlined, by lines 93, being simultaneously UP. In such a a case, EXCLUSIVE OR circuit will provide a DOWN signal on line 121, thereby blocking AND gates 122 and 124. This means that the signals on lines 125, 127 will be DOWN simultaneously. A reference to the coding scheme for lines 125, 127 will show that the condition of both lines being DOWN simultaneously, is not defined. That is, the condition of lines 125, 127 being DOWN simultaneously does not represent one of the three possible residues 1, 2, or 0. Obviously, when lines 125, 127 are simultaneously DOWN (which in the terms of this invention doesnt mean anything) it makes no sense to compare these signals with the signals representing the residue Q of the computer generated quotient. If the residue Q, of the computer generated quotient is either 1, 2, or 0, it follows that at least one of EXCLUSIVE OR circuits 135, 137 will provide an UP signal which will be transmitted by means of OR gate 139 to inverter 141 which in turn inverts the UP signal so that it appears as a DOWN signal that is unable to activate OR circuit 143 and thereby indicate a CONTINUE signal; In the event that the residue B, of the divisor is 0, it is clear then that comparator 28 will not compare the residue Q, of the quotient generated by the computer with the output signals of residue divider unit 26. 
In such an event, the comparison occurs, in a manner described below. Reference may now be had to FIG. 7 which discloses circuit means that may be utilized to afford a check on the correctness of the computer operation when the residue of the divisor is 0. Reference to the familiar division algorithm will indicate that an equality exists between the difierence of the dividend and the remainder, or the one hand, and the product of the divisor and the quotient, on the other hand. This equality holds in the natural number domain, as well as within the residue code. Therefore, it follows that, when the residue of the divisor is O, the residue difference between the dividend and the remainder will also be 0, if no errors are present. In accordance with the coding scheme previously estab lished, lines 90, 91 will be UP simultaneously when the residue difference, A R is 0. Similarly, when the residue B of the divisor is 0, lines 93, 95 will be UP simultaneously. Thus, the AND circuit which is illustrated in FIG. 7, and which is responsive to signals on lines 90, 91, 93 and 95, will provide an UP signal on its output line 150, when all of its input lines (90, 91, 93 and 95) are UP simultaneously. In effect, then, the comparison for identity of B with A, I has been made by AND circuit 149, the output of which is provided to OR gate 143 by means of line 150 so that the required CONTINUE indication is given on line 67. Now it can be seen that if AND gate 149 fails to yield an UP signal (which would mean that not all of lines 90, 91, 95, 93 are simultaneously UP), the proper CONTINUE signal will not be given. Instead, line 67 will be DOWN, thereby indicating an error. To summarize the comparison process then, it is clear that in the instance when the residue B of the divisor is not 0, a regular comparison will be effected by comparator 28 in the manner as described above. If the residue of the divisor is O, the correctness of the computer operation can still be indicated by comparing it with the residue difference A,-R,, which in that case must also be 0. If both A -R and B are 0, AND gate 149 will provide an UP signal which is transmitted to OR gate 143, thereby giving the proper UP signal on line 67 to indicate that the computer can continue. Reference may now be had to FIG. 8 which shows an illustrative embodiment of a translator from binary code to a residue code to the base 3. There is shown a source of binary bits, such as a binary register 160. Register 160 may be used to furnish the computer with the necessary operands for a division operation, and it may also store the resultants after the operation is completed. Translator 27 is shown to cooperate with register 160 and translates the magnitude represented by the bits stored in register 160 to the congruent residue to the base 3, thereby providing residue unit 20 (FIG. 1) with the proper residues of the operands and resultants of the division operation. Translator 27 includes a plurality of identical residue generators 162a-16212, one such generator being provided for every two bit positions of register 160; if the number of bits stored in register 160 is an odd number, the last residue generator (16211) would receive one of its inputs (16412) from the last bit and would have its other input (16611) connected to a constant signal level representing a zero bit. The first residue generator (162a) has two of its input lines (168a, 170a) connected to a constant UP signal level, representing a one bit. 
The arrangement of residue generators 162a-16211 in translator 27 is based on the mathematical equivalence of the residue of a binary number to the summation over the number of bits of: the product of the residue of the weight of a particular bit multiplied by the value of that particular bit. The successive residue generators (162a-16211) carry out a summation process which produces the residue of the binary number stored in register 160 on the output of the last residue generator 16211. Reference to FIG. 8a shows an individual residue generator 162 while may be used in a translator 27. Capital letters E, F, G and H are used to denote the inputs to residue generator 162 and the connection of the inputs to the individual AND gates 180189. A particular one of AND gates 180-189 will provide an UP signal output only when the variables on its inputs are simultaneously UP. Depending upon which one of AND gates 180189 is fired, one of OR gates 190, 191 will be fired to provide an UP signal on respective lines 172, 174, thereby indicating the output of residue generator 162. Consider, as a representative example, that the binary number 0100 is stored in register 160. (This is the binary representation of the number 4, the residue of which, to the base 3, is a one.) In response to the stored binary number, residue generator 162a is provided with respective UP signals on lines 166a, 168a, 170a, and a DOWN signal on line 164a. Reference to FIG. 8a will show that this particular combination of signals energizes only AND gate 186 which results in an UP signal from OR gate 191 on line 174. Line 172 will remain DOWN as OR gate will not be fired by the particular combination of input signals. The output of the residue generator 162a is thus a DOWN signal on line 172 and an UP signal on line 174. This output is provided to the next residue generator 162b, in addition to DOWN signals on lines 1641), 1661) from register 160. Reference again to FIG. 8a shows that the particular input conditions to residue generator 16% result in the firing of only AND gate 185. Therefore, the output of residue generator 162b is an UP signal on line 174 and a DOWN signal on line 172. Since no more bits are stored in a register 160, the output of residue generator 162]) represents the output of the last residue generator 16211 for the particular case where 11 equals b. Thus, the output of translator 27, is, in effect, represented by a DOWN signal on line 176 (line 172 of residue generator 16212) and an UP signal on line 178 (line 164 of residue generator 16212). The coding scheme adopted on lines 176, 17 8 to represent the output of translator 27 is identical to the one previously employed for the residue subtractor 24 and residue divider 26. Thus, the coding scheme is: Line \178 UP DOWN 176 In view of this coding scheme, it is evident that the DOWN signal on line 176 and the UP signal on line 178 represent a residue one, which is the proper residue, to the base 3, of the binary number 0100, which was assumed to be stored in register 160. While the invention has been particularly shown and described with reference to a preferred embodiment thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention. I claim: 1. 
A residue error checking system operable with a digital computer adapted to produce signals representing the residues of a quotient, a remainder, a dividend, and a divisor, as a result of the performance of a division operation by said computer, comprising: a first circuit means responsive to the signals representing the residues of said dividend and said remainder for producing an output signal representing the residue difference between the residue of said dividend and the residue of said remainder; a second circuit means responsive to the signals representing the residue of said divisor and the output of said first circuit means for performing a residue division of said residue difference by the residue of said divisor, whereby said second circuit means produces an output signal representing a residue quotient; and comparison means for comparing said residue quotient generated by said second circuit means with the residue of the quotient generated by said computer, whereby, if said comparison means indicates identity between the two residue quotients compared, the correctness of the division operation performed by said computer is verified. 2. A residue system for producing a residue equivalent of the quotient produced by a digital computer adapted to also produce signals representing the residue of a remainder, a dividend, and a divisor, comprising: a first circuit means responsive to the signals representing the residues of said dividend and said remainder for producing an output signal representing the residue difference between the residue of said dividend and the residue of said remainder; a second circuit means responsive to the signals representing the residue of said divisor and the output of said first circuit means for performing a residue division of said residue difference by the residue of said divisor, whereby said second circuit means produces an output signal representing a residue quotient. 3. A residue error checking system operable with a computer adapted to produce signals representing each of the unique values of the residues of a quotient, a remainder, a dividend, and a divisor, as a result of the performance of a division operation by said computer, comprising: a first circuit means responsive to the signals representing the residues of said dividend and said remainder for producing an output signal representing the residue difference between the residue of said dividend and the residue of said remainder; a second circuit means responsive to the output of said first circuit means and to the signals representing all non-zero values of the residue of said divisor, for performing a residue division of said residue difference by the residue of said divisor; and comparison means including circuit means for comparing the output of said second circuit means with the residue of the quotient generated by said computer and circuit means responsive to a signal representing a zero value of the residue of said divisor for comparing the output of said first circuit means with the residue of said divisor, whereby an equal comparison output by said comparison means verifies the correctness of the division operation performed by said computer. 4. 
A residue error checking system operable with a computer adapted to produce signals representing each of the unique values of the residues of a quotient, a remainder, a dividend, and a divisor, as a result of the performance of a division operation by said computer, comprising: a first circuit means responsive to the signals representing the residues of said dividend and said remainder for producing an output signal representing the residue difference between the residue of said dividend and the residue of said remainder; second circuit means responsive to the signals representing the values of the residue of said divisor for producing a gating signal for all non-zero values of the residue of said divisor; a third circuit means responsive to the signals representing the residue of said divisor and the output of said first circuit means for performing a residue division of said residue difference by the residue of said divisor; a fourth circuit means connected to the output of said third circuit means and responsive to said gating signal for gating the output of said fourth circuit means only when said gating signal is present; and comparison means including circuit means for comparing the gated output of said fourth circuit means with the residue of the quotient generated by said computer and circuit means responsive to a signal representing a zero value of the residue of said divisor for comparing the output of said first circuit means with the residue of said divisor, whereby an equal comparison output by said comparison means verifies the correctness of the division operation performed by said computer. 5. A residue error checking system cooperating with a digital computer adapted to produce signals representing each of the possible values of the residues of a quotient, a remainder, a dividend, and a divisor, as a result of the performance of a division operation by said computer, comprising: a first circuit means responsive to the signals representing the residues of said dividend and said remainder for producing an output signal representing the residue difference between the residue of said dividend and the residue of said remainder; second circuit means responsive to the signals representing the values of the residue of said divisor for producing a gating signal for all non-zero values of the residue of said divisor; a third circuit means, responsive to the presence of said gating signal and the signals representing the residue of said divisor and the output of said first circuit means, for performing a residue division of said residue difference by the residue of said divisor only when said gating signal is present; and comparison means responsive to said gating signal for comparing the output of said third circuit means with the residue of the quotient generated by said computer when said gating signal is present, and for comparing the output of said first circuit means with the residue of said divisor, when said gating signal is absent, whereby an equal comparison output by said comparison means verifies the correctness of the division operation performed by said computer. 6. 
A residue error checking system cooperating with a digital computer adapted to produce signals representing each of the possible values of the residues of a quotient, a remainder, a dividend, and a divisor, as a result of the performance of a division operation by said computer, comprising: a first circuit means responsive to the signals representing the residues of said dividend and said remainder for producing an output signal representing the residue difference between the residue of said dividend and the residue of said remainder; second circuit means responsive to the signals representing the values of the residue of said divisor for pro ducing a gating signal for all non-zero values of the residue of said divisor; a third circuit means, responsive to the presence of said gating signal and the signals representing the residue of said divisor and the output of said first circuit means, for performing a residue division of said residue difference by the residue of said divisor only when said gating signal is present; and comparison means including circuit means responsive to the presence of said gating signal for comparing the output of said third circuit means with the residue of the quotient generated by said computer when said gating signal is present, and including circuit means responsive to the absence of said gating signal for comparing the output of said first circuit means with the signal representing the value of the residue of said divisor when said gating signal is absent, whereby an equal comparison output by said comparison means verifies the correctness of the division operation performed by said computer. References Cited by the Examiner UNITED STATES PATENTS 2,936,116 5/ 1960 Adamson et a1 235-173 X ROBERT C. BAILEY, Primary Examiner. MALCOLM A- MORR SON, Examiner. Patent Citations Cited PatentFiling datePublication dateApplicantTitle US2936116 *Nov 12, 1952May 10, 1960Hnghes Aircraft CompanyElectronic digital computer Referenced by Citing PatentFiling datePublication dateApplicantTitle US3296452 *Sep 16, 1963Jan 3, 1967Westinghouse Electric CorpLoad regulation US4555784 *Mar 5, 1984Nov 26, 1985Ampex CorporationParity and syndrome generation for error detection and correction in digital communication systems US4597083 *Apr 6, 1984Jun 24, 1986Ampex CorporationError detection and correction in digital communication systems US4769780 *Feb 10, 1986Sep 6, 1988International Business Machines CorporationHigh speed multiplier US4926374 *Nov 23, 1988May 15, 1990International Business Machines CorporationResidue checking apparatus for detecting errors in add, subtract, multiply, divide and square root operations EP0374420A2 *Oct 21, 1989Jun 27, 1990International Business Machines CorporationA residue checking apparatus for detecting errors in add, substract, multiply, divide and square root operations Classifications U.S. Classification708/532, 714/808, 714/E11.33 International ClassificationG06F11/10 Cooperative ClassificationG06F11/104 European ClassificationG06F11/10M1W
Commit 065efd4b authored by GILLES Sebastien's avatar GILLES Sebastien #1480 Remove a bit of code that harmed hyperelastic case (and which was the... #1480 Remove a bit of code that harmed hyperelastic case (and which was the reason anyway I took the time to introduce a proper Directory class). parent 15d4c45d ......@@ -116,51 +116,4 @@ namespace MoReFEM::Internal::MoReFEMDataNS } void AskResultDirectoryRemoval(const std::string& directory, overwrite_directory do_overwrite_directory) { if (FilesystemNS::Folder::DoExist(directory)) { switch (do_overwrite_directory) { case overwrite_directory::no: { std::string answer; while (answer != "y" && answer != "n") { do { if (!std::cin) { std::cin.clear(); // clear the states of std::cin, putting it back to `goodbit`. std::cin.ignore(10000, '\n'); // clean-up what might remain in std::cin before using it again. } std::cout << "Directory '" << directory << "' already exists. Do you want to remove it? " "[y/n]" << std::endl; std::cin >> answer; } while (!std::cin); } if (answer == "n") { std::cout << "The program will therefore exit here." << std::endl; throw ExceptionNS::GracefulExit(__FILE__, __LINE__); } break; } case overwrite_directory::yes: { std::cout << "Removing pre-existing directory " << directory << " before recreating it." << std::endl; } break; } FilesystemNS::Folder::Remove(directory, __FILE__, __LINE__); } } } // namespace MoReFEM::Internal::MoReFEMDataNS ......@@ -121,15 +121,6 @@ namespace MoReFEM::Internal::MoReFEMDataNS const std::string& input_data_file); /*! * \brief If \a directory already exists, ask the user whereas he wants to remove it or stop the program. * * \param[in] directory Directory which existence is checked. * \param[in] do_overwrite_directory If 'yes', the former directory is removed silently. */ void AskResultDirectoryRemoval(const std::string& directory, overwrite_directory do_overwrite_directory); } // namespace MoReFEM::Internal::MoReFEMDataNS ...... ......@@ -69,10 +69,13 @@ namespace MoReFEM::Internal * \brief Constructor. * * \copydoc doxygen_hide_input_data_arg * \param[in] behaviour Behaviour to use when the subdirectory to create already exist. Irrelevant for policies * that only read existing directories. */ template<class InputDataT> explicit Parallelism(const ::MoReFEM::Wrappers::Mpi& mpi, const InputDataT& input_data); const InputDataT& input_data, ::MoReFEM::FilesystemNS::behaviour behaviour); //! Destructor. ~Parallelism() = default; ...... ......@@ -18,7 +18,8 @@ namespace MoReFEM::Internal template<class InputDataT> Parallelism::Parallelism(const ::MoReFEM::Wrappers::Mpi& mpi, const InputDataT& input_data) const InputDataT& input_data, ::MoReFEM::FilesystemNS::behaviour behaviour) { namespace ipl = Utilities::InputDataNS; ......@@ -50,7 +51,7 @@ namespace MoReFEM::Internal directory_ = std::make_unique<FilesystemNS::Directory>(mpi, path, FilesystemNS::behaviour::ask, behaviour, __FILE__, __LINE__); break; ...... ......@@ -59,10 +59,7 @@ namespace MoReFEM mpi, DoTrackUnusedFieldsT); // Parallelism is an optional field: it might not be present in the Lua file (for tests for instance it is not // meaningful). if constexpr (InputDataT::template Find<InputDataNS::Parallelism>()) parallelism_ = std::make_unique<Internal::Parallelism>(mpi, *input_data_); namespace ipl = Utilities::InputDataNS; using Result = InputDataNS::Result; ......@@ -85,39 +82,11 @@ namespace MoReFEM path, directory_behaviour, __FILE__, __LINE__); } // We first deal with the data that are written only in the 'main' folder. 
if (mpi.IsRootProcessor()) { if constexpr(ProgramTypeT == program_type::model) { // Internal::MoReFEMDataNS::AskResultDirectoryRemoval(result_directory_, do_overwrite_directory); if constexpr (InputDataT::template Find<InputDataNS::Parallelism>()) Internal::MoReFEMDataNS::AskResultDirectoryRemoval(GetParallelism().GetDirectory(), do_overwrite_directory); } else if constexpr(ProgramTypeT == program_type::test) { // if (FilesystemNS::Folder::DoExist(result_directory_)) // { // std::cout << "Removing pre-existing directory " << result_directory_ << " before " // "recreating it." << std::endl; // FilesystemNS::Folder::Remove(result_directory_, __FILE__, __LINE__); // } if constexpr (InputDataT::template Find<InputDataNS::Parallelism>()) { const auto parallelism_directory = GetParallelism().GetDirectory(); if (FilesystemNS::Folder::DoExist(parallelism_directory)) { std::cout << "Removing pre-existing directory " << parallelism_directory << " before " "recreating it." << std::endl; FilesystemNS::Folder::Remove(parallelism_directory, __FILE__, __LINE__); } } } // Parallelism is an optional field: it might not be present in the Lua file (for tests for instance it is not // meaningful). if constexpr (InputDataT::template Find<InputDataNS::Parallelism>()) parallelism_ = std::make_unique<Internal::Parallelism>(mpi, *input_data_, directory_behaviour); } mpi.Barrier(); ...... ......@@ -536,7 +536,7 @@ Parallelism = { -- If Policy is 'RunFromPreprocessed', path to the directory which contains the pre-processed data. -- Expected format: "VALUE" directory = '' directory = '${MOREFEM_RESULT_DIR}/MidpointHyperelasticity_Parallelism' } -- Parallelism ...... ......@@ -463,7 +463,7 @@ Parallelism = { -- If Policy is 'RunFromPreprocessed', path to the directory which contains the pre-processed data. -- Expected format: "VALUE" directory = '' directory = '${MOREFEM_RESULT_DIR}/MidpointHyperelasticity_Parallelism/${MOREFEM_START_TIME}' } -- Parallelism ...... ......@@ -24,7 +24,12 @@ int main(int argc, char** argv) //! \copydoc doxygen_hide_model_specific_input_data using InputData = MidpointHyperelasticityNS::InputData; //TODO: // - Solve the test failure // - CHeck overwrite option // - Check ask in parallel try { MoReFEMData<InputData, program_type::model> morefem_data(argc, argv); ......
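For readers skimming the diff: the removed helper prompted the user on std::cin before deleting a pre-existing result directory, while the new code hands an overwrite policy down to the Directory and Parallelism constructors. Below is a minimal sketch of that policy-driven idea using plain std::filesystem; the function name and structure are hypothetical and do not reproduce MoReFEM's actual FilesystemNS API.

// Minimal sketch of a policy-driven directory overwrite (hypothetical names, not MoReFEM's API).
#include <filesystem>
#include <stdexcept>

enum class overwrite_directory { yes, no };

void PrepareResultDirectory(const std::filesystem::path& dir, overwrite_directory policy)
{
    if (std::filesystem::exists(dir))
    {
        if (policy == overwrite_directory::no)
            throw std::runtime_error("Directory already exists: " + dir.string());
        std::filesystem::remove_all(dir);   // silent removal when the policy allows it
    }
    std::filesystem::create_directories(dir);
}

The benefit over the removed interactive loop is that the decision is made once, up front, from configuration, so the same code path works in batch and parallel runs where no console input is available.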
How to tell if a file or a folder could be placed into the Recycle Bin?

#1 (Join Date Nov 2003, Location Portland, OR, Posts 883)
How to tell if a file or a folder could be placed into the Recycle Bin?
Say, I have the "K:\test del USB" folder. Then I do the following:
Code:
SHFILEOPSTRUCT sfo = {0};
sfo.wFunc = FO_DELETE;
sfo.pFrom = L"K:\\test del USB\0";
sfo.fFlags = FOF_ALLOWUNDO | FOF_SILENT | /*FOF_NOCONFIRMATION |*/ FOF_NOERRORUI | FOF_NOCONFIRMMKDIR | FOF_WANTNUKEWARNING;
int res = SHFileOperation(&sfo);
BOOL bFullSuccess = res == 0 && !sfo.fAnyOperationsAborted;
So when I run it, the SHFileOperation API shows this warning: "Are you sure you want to permanently delete this folder?" [attached screenshot: 1.png]
If the end-user clicks "No", SHFileOperation returns 0x4c7, which I believe is ERROR_CANCELLED. My question is, if I don't need any UI, how can I know that my file/folder will be permanently deleted vs. placed into the Recycle Bin?

#2 Arjay (Moderator / MS MVP Power Poster, Join Date Aug 2004, Posts 12,412)
Re: How to tell if a file or a folder could be placed into the Recycle Bin?
The way I read the documentation is that if a user clicks "No", the operation is aborted and the file(s) aren't permanently deleted or placed in the recycle bin (you can check the fAnyOperationsAborted flag). If you need to permanently delete the file (as msdn says), you must pass in a full path to the file to be deleted and make sure the FOF_ALLOWUNDO flag isn't set or use DeleteFile.

#3 (Join Date Nov 2003, Location Portland, OR, Posts 883)
Re: How to tell if a file or a folder could be placed into the Recycle Bin?
No, I don't want to delete it permanently. Quite the opposite, I need to put that folder in the Recycle Bin. My issue is knowing that it will actually be put into the bin as opposed to permanently deleted. As I said above, this actually works when I enable the Explorer's UI. I need it to work without the UI though.

#4 Arjay (Moderator / MS MVP Power Poster, Join Date Aug 2004, Posts 12,412)
Re: How to tell if a file or a folder could be placed into the Recycle Bin?
So it doesn't work when you specify the full path and the allow undo flag?

#5 (Join Date Nov 2003, Location Portland, OR, Posts 883)
Re: How to tell if a file or a folder could be placed into the Recycle Bin?
If I uncomment the FOF_NOCONFIRMATION flag in my code sample above and call it, say, on a very large folder, or a folder located on an external drive that doesn't have a Recycle Bin, it will be permanently deleted. Thus my question in the title. In other words, there's no way of knowing whether something will be placed into the bin or deleted for good.

#6 Arjay (Moderator / MS MVP Power Poster, Join Date Aug 2004, Posts 12,412)
Re: How to tell if a file or a folder could be placed into the Recycle Bin?
There doesn't seem to be with this api, does there? Can you take a different approach, such as IFileOperation (Vista and above only)? http://msdn.microsoft.com/en-us/libr...(v=vs.85).aspx

#7 (Join Date Apr 2000, Location Belgium (Europe), Posts 4,626)
Re: How to tell if a file or a folder could be placed into the Recycle Bin?
Only local files on a fixed disk can be recycled. Networked (remote) files and files on a non-fixed disk (USB sticks) can't be recycled.
See GetDriveType() on MSDN

8. #8 Join Date Nov 2000 Location Voronezh, Russia Posts 6,543

Re: How to tell if a file or a folder could be placed into the Recycle Bin?

Quote Originally Posted by dc_2000 View Post
on a very large folder, or a folder located on an external drive that doesn't have a Recycled Bin, it will be permanently deleted. Thus my question in the title.

You know, the answer seems to be pretty obvious:
• If your file system object is located on a network resource, it cannot be put into the recycle bin.
• If your file system object is located on a drive that has no recycle bin folder (e.g. a non-NTFS drive), it cannot be put into the recycle bin.
• If your file system object is located on a drive with less free space than the object size, it cannot be put into the recycle bin.

Besides, the recycle bin functionality is meant to be used explicitly by design, and the choice is intentionally left to the end user, because the user must be aware of the situation when disk space is about to be wasted on a backup that nobody really needs.

Best regards, Igor

9. #9 Join Date Apr 2000 Location Belgium (Europe) Posts 4,626

Re: How to tell if a file or a folder could be placed into the Recycle Bin?

Quote Originally Posted by Igor Vartanov View Post
* If your file system object is located on a drive that has no recycle bin folder (e.g. a non-NTFS drive), it cannot be put into the recycle bin.
* If your file system object is located on a drive with less free space than the object size, it cannot be put into the recycle bin.

Neither of those two is correct.

1) A FAT drive can have a recycle bin, and in fact did so by default in Win95, 98 and ME. The use of FAT for hard disks has since become somewhat obsolete (mainly because of the limitations of FAT), but even today a hard disk formatted as FAT on Win8 has a recycle bin. The presence of a recycled folder is also not a valid test. A drive may have such a folder without it actually being usable for recycling (this can happen if you share your C: drive on a network: someone mapping that share as, say, Z: would see the recycled folder, but it wouldn't be used when deleting from the network; it would be used when deleting files from the local machine). I've also seen USB sticks with a recycled folder; how it got there, I don't know.

2) This too is not valid, since the file is 'moved' into the recycled folder ON THE SAME DISK it is being removed from; deleting won't affect the free space before or after the operation. However, the recycled folder does have a size limit (the default is 10% of the total disk capacity), so deleting any file larger than this capacity would not go into the recycle bin at all. You can change the limit by right-clicking on the recycle bin and selecting Properties.

Quote Originally Posted by Igor Vartanov View Post
Besides, the recycle bin functionality is meant to be used explicitly by design, and the choice is intentionally left to the end user, because the user must be aware of the situation when disk space is about to be wasted on a backup that nobody really needs.

But yes... THIS... stop working against the system; embrace what it does and how it does it, rather than thinking you know better than the hundreds of designers at MS who made some of those decisions.

10. #10 VictorN's Avatar VictorN is offline Super Moderator Power Poster Join Date Jan 2003 Location Wallisellen (ZH), Switzerland Posts 18,681

Re: How to tell if a file or a folder could be placed into the Recycle Bin?

Quote Originally Posted by OReubens View Post
... I've also seen USB sticks with a recycled folder, how it got there... I don't know.
Then see https://www.google.com/search?source...lder&gs_htsa=1 Victor Nijegorodov 11. #11 Join Date Nov 2003 Location Portland, OR Posts 883 Re: How to tell if a file or a folder could be placed into the Recycle Bin? No, guys, there should be absolutely no guesswork involved in this. We're talking about potentially permanently deleting someone's files, and maybe even whole folders in a recursive fashion. Arjay, actually gave me a good hint. The documentation for SHFileOperation suggests using IFileOperation in one line of text. That seems to be a solution. (Unfortunately it takes more than one line of code to implement it.) All this time I was searching for a way to code that interface but all the examples I was able to come up with were pretty sketchy. So I'll post my entire working code sample here for anyone else who comes across this subject. Also before posting the code and the interface implementation, here's an overview in a nutshell. By implementing the IFileOperationProgressSink interface, you will have control over the entire deletion process, and namely, specific interface methods will be called before and after deleting each file. Luckily, the PreDeleteItem method will have the DWORD dwFlags parameter that will have the TSF_DELETE_RECYCLE_IF_POSSIBLE flag set if that specific item (file or folder) is being placed in the Recycle Bin. (That flag will not be set if the item is permanently deleted.) So your code can check, and if that flag is not on, simply react by either showing a user warning, or by automatically aborting the process. As an added bonus, you can also implement your own tracking of a lengthy deletion process by aborting it at your specific moment (which original SHFileOperation did not provide.) The only downside to my method below is that it is not available on Windows XP. So on that OS, you're stuck with SHFileOperation. OK, so having said that, now the "fun" part. I'm going to be using MFC for the COM wrapper. It will show the guts of the interface itself (in case Win32 people want to implement it as well.) First the method itself that is called to delete a folder or a file: (I'm using pseudo-code for brevity.) Code: BOOL DeleteToRecycleBin(LPCTSTR pStrItemPath) { //Delete 'pStrItemPath' item into the Recycle Bin //INFO: This method works on Vista and later OS! //INFO: The item will be deleted only if it can be placed into the Recycle Bin. //'pStrItemPath' = Full item path for a file or a folder to delete //RETURN: // = TRUE if done BOOL bRes = FALSE; if(pStrItemPath && pStrItemPath[0] != 0) { //Make sure it's not a relative path if(!PathIsRelative(pStrItemPath)) { //Initialize COM as STA. HRESULT hr = ::CoInitializeEx(NULL, COINIT_APARTMENTTHREADED | COINIT_DISABLE_OLE1DDE); if(SUCCEEDED(hr)) { //Create the IFileOperation interface CComPtr<IFileOperation> pfo; if(SUCCEEDED(hr = CoCreateInstance(CLSID_FileOperation, NULL, CLSCTX_ALL, IID_PPV_ARGS(&pfo)))) { //Set operation flags if(SUCCEEDED(hr = pfo->SetOperationFlags( FOF_ALLOWUNDO | FOF_SILENT | FOF_NOERRORUI | FOFX_EARLYFAILURE | FOF_NO_UI ))) { //Create an IShellItem from the supplied path. 
CComPtr<IShellItem> psiToDelete; if(SUCCEEDED(hr = SHCreateItemFromParsingName(pStrItemPath, NULL, IID_PPV_ARGS(&psiToDelete)))) { //Initialize our class with the IFileOperationProgressSink implementation CRecycleBinOps* pThis = new CRecycleBinOps(); IFileOperationProgressSink* pSink = (IFileOperationProgressSink*)pThis->GetInterface(&IID_IFileOperationProgressSink); if(pSink) { //Add the operation if(SUCCEEDED(hr = pfo->DeleteItem(psiToDelete, pSink))) { //And perform the operation if(SUCCEEDED(hr = pfo->PerformOperations())) { //Done bRes = TRUE; } } } delete pThis; pThis = NULL; } } } //Unit COM ::CoUninitialize(); } } } return bRes; } And now the CRecycleBinOps class that implements the IFileOperationProgressSink interface that has all the callback methods we need: Code: //.h file class CRecycleBinOps : public CCmdTarget { public: CRecycleBinOps(); ~CRecycleBinOps(void); private: DECLARE_INTERFACE_MAP(); BEGIN_INTERFACE_PART(Ifops, IFileOperationProgressSink) STDMETHOD(StartOperations)( void); STDMETHOD(FinishOperations)( /* [in] */ HRESULT hrResult); STDMETHOD(PreRenameItem)( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiItem, /* [string][unique][in] */ __RPC__in_opt LPCWSTR pszNewName); STDMETHOD(PostRenameItem)( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiItem, /* [string][in] */ __RPC__in LPCWSTR pszNewName, /* [in] */ HRESULT hrRename, /* [in] */ __RPC__in_opt IShellItem *psiNewlyCreated); STDMETHOD(PreMoveItem)( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiItem, /* [in] */ __RPC__in_opt IShellItem *psiDestinationFolder, /* [string][unique][in] */ __RPC__in_opt LPCWSTR pszNewName); STDMETHOD(PostMoveItem)( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiItem, /* [in] */ __RPC__in_opt IShellItem *psiDestinationFolder, /* [string][unique][in] */ __RPC__in_opt LPCWSTR pszNewName, /* [in] */ HRESULT hrMove, /* [in] */ __RPC__in_opt IShellItem *psiNewlyCreated); STDMETHOD(PreCopyItem)( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiItem, /* [in] */ __RPC__in_opt IShellItem *psiDestinationFolder, /* [string][unique][in] */ __RPC__in_opt LPCWSTR pszNewName); STDMETHOD(PostCopyItem)( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiItem, /* [in] */ __RPC__in_opt IShellItem *psiDestinationFolder, /* [string][unique][in] */ __RPC__in_opt LPCWSTR pszNewName, /* [in] */ HRESULT hrCopy, /* [in] */ __RPC__in_opt IShellItem *psiNewlyCreated); STDMETHOD(PreDeleteItem)( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiItem); STDMETHOD(PostDeleteItem)( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiItem, /* [in] */ HRESULT hrDelete, /* [in] */ __RPC__in_opt IShellItem *psiNewlyCreated); STDMETHOD(PreNewItem)( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiDestinationFolder, /* [string][unique][in] */ __RPC__in_opt LPCWSTR pszNewName); STDMETHOD(PostNewItem)( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiDestinationFolder, /* [string][unique][in] */ __RPC__in_opt LPCWSTR pszNewName, /* [string][unique][in] */ __RPC__in_opt LPCWSTR pszTemplateName, /* [in] */ DWORD dwFileAttributes, /* [in] */ HRESULT hrNew, /* [in] */ __RPC__in_opt IShellItem *psiNewItem); STDMETHOD(UpdateProgress)( /* [in] */ UINT iWorkTotal, /* [in] */ UINT iWorkSoFar); STDMETHOD(ResetTimer)( void); STDMETHOD(PauseTimer)( void); STDMETHOD(ResumeTimer)( void); END_INTERFACE_PART(Ifops) }; and the implementation part: Code: //.cpp file #include 
"StdAfx.h" #include "RecycleBinOps.h" CRecycleBinOps::CRecycleBinOps() { } CRecycleBinOps::~CRecycleBinOps(void) { } BEGIN_INTERFACE_MAP(CRecycleBinOps, CCmdTarget) INTERFACE_PART(CRecycleBinOps, IID_IFileOperationProgressSink, Ifops) END_INTERFACE_MAP() ULONG CRecycleBinOps::XIfops::AddRef() { METHOD_PROLOGUE(CRecycleBinOps, Ifops); return pThis->ExternalAddRef(); } ULONG CRecycleBinOps::XIfops::Release() { METHOD_PROLOGUE(CRecycleBinOps, Ifops); return pThis->ExternalRelease(); } HRESULT CRecycleBinOps::XIfops::QueryInterface(REFIID riid, void ** ppvObj) { METHOD_PROLOGUE(CRecycleBinOps, Ifops); return pThis->ExternalQueryInterface( &riid, ppvObj ); } HRESULT CRecycleBinOps::XIfops::StartOperations(void) { //StartOperations is the first of the IFileOperationProgressSink methods to be called after PerformOperations. //It can be used to perform any setup or initialization that you require before the file operations begin. METHOD_PROLOGUE(CRecycleBinOps, Ifops); //If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code. return S_OK; } HRESULT CRecycleBinOps::XIfops::FinishOperations( /* [in] */ HRESULT hrResult) { //Performs caller-implemented actions after the last operation performed by the call to IFileOperation is complete. //'hrResult' = final result of the operation METHOD_PROLOGUE(CRecycleBinOps, Ifops); return S_OK; } HRESULT CRecycleBinOps::XIfops::PreRenameItem( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiItem, /* [string][unique][in] */ __RPC__in_opt LPCWSTR pszNewName) { METHOD_PROLOGUE(CRecycleBinOps, Ifops); return E_NOTIMPL; //S_OK; } HRESULT CRecycleBinOps::XIfops::PostRenameItem( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiItem, /* [string][in] */ __RPC__in LPCWSTR pszNewName, /* [in] */ HRESULT hrRename, /* [in] */ __RPC__in_opt IShellItem *psiNewlyCreated) { METHOD_PROLOGUE(CRecycleBinOps, Ifops); return E_NOTIMPL; //S_OK; } HRESULT CRecycleBinOps::XIfops::PreMoveItem( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiItem, /* [in] */ __RPC__in_opt IShellItem *psiDestinationFolder, /* [string][unique][in] */ __RPC__in_opt LPCWSTR pszNewName) { METHOD_PROLOGUE(CRecycleBinOps, Ifops); return E_NOTIMPL; //S_OK; } HRESULT CRecycleBinOps::XIfops::PostMoveItem( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiItem, /* [in] */ __RPC__in_opt IShellItem *psiDestinationFolder, /* [string][unique][in] */ __RPC__in_opt LPCWSTR pszNewName, /* [in] */ HRESULT hrMove, /* [in] */ __RPC__in_opt IShellItem *psiNewlyCreated) { METHOD_PROLOGUE(CRecycleBinOps, Ifops); return E_NOTIMPL; //S_OK; } HRESULT CRecycleBinOps::XIfops::PreCopyItem( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiItem, /* [in] */ __RPC__in_opt IShellItem *psiDestinationFolder, /* [string][unique][in] */ __RPC__in_opt LPCWSTR pszNewName) { METHOD_PROLOGUE(CRecycleBinOps, Ifops); return E_NOTIMPL; //S_OK; } HRESULT CRecycleBinOps::XIfops::PostCopyItem( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiItem, /* [in] */ __RPC__in_opt IShellItem *psiDestinationFolder, /* [string][unique][in] */ __RPC__in_opt LPCWSTR pszNewName, /* [in] */ HRESULT hrCopy, /* [in] */ __RPC__in_opt IShellItem *psiNewlyCreated) { METHOD_PROLOGUE(CRecycleBinOps, Ifops); return E_NOTIMPL; //S_OK; } HRESULT CRecycleBinOps::XIfops::PreDeleteItem( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiItem) { //Performs caller-implemented actions before the delete process for each item begins. 
METHOD_PROLOGUE(CRecycleBinOps, Ifops); //See if we're deleting into Recycle Bin if(!(dwFlags & TSF_DELETE_RECYCLE_IF_POSSIBLE)) { //We're not! Don't allow to continue return E_ABORT; } //Returns S_OK if successful, or an error value otherwise. //In the case of an error value, the delete operation and all subsequent operations pending from the call to IFileOperation are canceled. return S_OK; } HRESULT CRecycleBinOps::XIfops::PostDeleteItem( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiItem, /* [in] */ HRESULT hrDelete, /* [in] */ __RPC__in_opt IShellItem *psiNewlyCreated) { //Performs caller-implemented actions after the delete process for each item is complete. METHOD_PROLOGUE(CRecycleBinOps, Ifops); //Here if you want to see if this item was permanently deleted //do this post-op check: if(psiNewlyCreated == NULL) { //This item was permanently deleted //Note that at this point the file/folder is already gone! } //Returns S_OK if successful, or an error value otherwise. //In the case of an error value, all subsequent operations pending from the call to IFileOperation are canceled. return S_OK; } HRESULT CRecycleBinOps::XIfops::PreNewItem( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiDestinationFolder, /* [string][unique][in] */ __RPC__in_opt LPCWSTR pszNewName) { METHOD_PROLOGUE(CRecycleBinOps, Ifops); return E_NOTIMPL; //S_OK; } HRESULT CRecycleBinOps::XIfops::PostNewItem( /* [in] */ DWORD dwFlags, /* [in] */ __RPC__in_opt IShellItem *psiDestinationFolder, /* [string][unique][in] */ __RPC__in_opt LPCWSTR pszNewName, /* [string][unique][in] */ __RPC__in_opt LPCWSTR pszTemplateName, /* [in] */ DWORD dwFileAttributes, /* [in] */ HRESULT hrNew, /* [in] */ __RPC__in_opt IShellItem *psiNewItem) { METHOD_PROLOGUE(CRecycleBinOps, Ifops); return E_NOTIMPL; //S_OK; } HRESULT CRecycleBinOps::XIfops::UpdateProgress( /* [in] */ UINT iWorkTotal, /* [in] */ UINT iWorkSoFar) { //Provides an estimate of the total amount of work currently done in relation to the total amount of work. METHOD_PROLOGUE(CRecycleBinOps, Ifops); //For more details on this method and the meaning of these parameters check MSDN: // http://msdn.microsoft.com/en-us/library/windows/desktop/bb775753(v=vs.85).aspx //If this method succeeds, it returns S_OK. Otherwise, it returns an HRESULT error code. return S_OK; } HRESULT CRecycleBinOps::XIfops::ResetTimer( void) { METHOD_PROLOGUE(CRecycleBinOps, Ifops); return E_NOTIMPL; //S_OK; } HRESULT CRecycleBinOps::XIfops::PauseTimer( void) { METHOD_PROLOGUE(CRecycleBinOps, Ifops); return E_NOTIMPL; //S_OK; } HRESULT CRecycleBinOps::XIfops::ResumeTimer( void) { METHOD_PROLOGUE(CRecycleBinOps, Ifops); return E_NOTIMPL; //S_OK; } I know it's a lot of code, but I couldn't seem to find a simpler solution. If anyone wants to add anything to it, feel free to post below. 12. #12 Arjay's Avatar Arjay is offline Moderator / MS MVP Power Poster Join Date Aug 2004 Posts 12,412 Re: How to tell if a file or a folder could be placed into the Recycle Bin? Quote Originally Posted by dc_2000 View Post OK, so having said that, now the "fun" part. I'm going to be using MFC for the COM wrapper. Check out doing the COM stuff with ATL (as MFC is extremely clunky for doing anything COM). Posting Permissions • You may not post new threads • You may not post replies • You may not post attachments • You may not edit your posts •   Windows Mobile Development Center Click Here to Expand Forum to Full Width This a Codeguru.com survey! HTML5 Development Center
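As a rough companion to posts #7 and #9 above (this is not from the original thread): if you only need a quick heuristic before calling SHFileOperation, you can check the drive type of the path's root first. The function below is an illustrative, untested sketch; the function name and the drive-letter-only simplification are mine, and the IFileOperationProgressSink approach in post #11 remains the reliable way to know what actually happens to each item.

Code:
// Heuristic pre-check only: fixed local drives are the usual candidates for
// the Recycle Bin (see posts #7 and #9). Assumes a simple "X:\..." path;
// UNC paths and mapped-drive subtleties are deliberately ignored here.
#include <windows.h>
#include <shellapi.h>   // SHQueryRecycleBin; link with shell32.lib

bool MayBeRecyclable(LPCWSTR pszPath)
{
    if (!pszPath || !pszPath[0] || pszPath[1] != L':')
        return false;                       // not a drive-letter path

    wchar_t szRoot[4] = { pszPath[0], L':', L'\\', 0 };

    if (GetDriveTypeW(szRoot) != DRIVE_FIXED)
        return false;                       // network/removable drives normally have no bin

    SHQUERYRBINFO rbInfo = { sizeof(rbInfo) };
    // Succeeds when the shell can query a bin for this root; still only a hint,
    // not a guarantee that a particular item will fit in it.
    return SUCCEEDED(SHQueryRecycleBinW(szRoot, &rbInfo));
}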
I have 2 data contexts in my application (different databases) and need to be able to query a table in context A with a right join on a table in context B. How do I go about doing this in LINQ2SQL?

Why?: We are using a SaaS product for tracking our time, projects, etc. and would like to send new service requests to this product to prevent our team from duplicating data entry.

Context A: This db stores service request information. It is a third-party DB and we are not able to make changes to its structure, as that could have unintended, non-supportable consequences downstream.

Context B: This db stores the "log" data of service requests that have been processed. My team and I have full control over this DB's structure, etc. Unprocessed service requests should find their way into this DB, and another process will identify them as not having been processed and send the records to the SaaS product.

This is the query that I am looking to modify. I was able to do a !list.Contains(c.swHDCaseId) initially, but this cannot handle more than 2100 items. Is there a way to add a join to the other context?

var query = (from c in contextA.Cases
             where monitoredInboxList.Contains(c.INBOXES.inboxName)
             //right join d in contextB.CaseLog on d.ID = c.ID....
             select new {
                 //setup fields here...
             });

3 Answers

(Accepted, 2 votes) Your best bet, outside of database solutions, is to join using LINQ (to Objects) after execution. I realize this isn't the solution you were hoping for. At least at this level, you won't have to worry about the IN-list limitation (.Contains).
Edit: "outside of database solutions" above really points to linked-server solutions, where you allow the table/view from context A to exist in the database from context B.

You could try using a GetTable command. I think this loads all of contextB.TableB's data first, not 100% sure on that though. I don't have an environment set up to play around in or test this out, so let me know if it works =)

from a in contextA.TableA
join b in contextB.GetTable<TableB>() on a.id equals b.id
select new { a, b }

Comment: I'm not quite sure why this is upvoted. GetTable<type> is the underlying call behind the plural object properties (i.e. TableBs in this case). This won't work any more than if you called the property directly. – Marc Mar 24 '10 at 18:30

If you cannot extract the 2 tables into List objects and then join them, then you will probably have to do something database-side. I would recommend creating a linked server and a view on the DB server you have control of. You can then do the join in the view, and you would have a very simple LINQ query to just retrieve the view. I am not sure how LINQ to SQL could ever do a join between 2 data contexts pointing to 2 different servers.
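To make the accepted answer concrete, here is a rough, untested sketch of joining in memory after both queries execute. It is not from the original thread: member names (Cases, CaseLog, ID, swHDCaseId, INBOXES.inboxName) follow the question, while the int key type, the selected fields and the usual System.Linq / System.Collections.Generic usings are assumptions for illustration.

// Rough sketch only -- materialize both result sets, then filter with LINQ to Objects.
var candidateCases = (from c in contextA.Cases
                      where monitoredInboxList.Contains(c.INBOXES.inboxName)
                      select c).ToList();                        // runs against database A

var loggedIds = new HashSet<int>(
    contextB.CaseLog.Select(d => d.ID));                         // runs against database B

// From here on this is LINQ to Objects, so the ~2100-parameter limit
// that broke !list.Contains(...) no longer applies.
var unprocessed = candidateCases
    .Where(c => !loggedIds.Contains(c.swHDCaseId))
    .Select(c => new { c.swHDCaseId /*, other fields here... */ })
    .ToList();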
path: root/src/modules/snow/snow.c
blob: 8befc9202e50246e33c0d1f60e112ee4fa624113 (plain)

#include <stdint.h>
#include <stdlib.h>
#include <time.h>

#include "til.h"
#include "til_fb.h"

/* Copyright (C) 2019 - Vito Caputo <[email protected]> */

/* This implements white noise / snow just using rand() */

typedef union snow_seed_t {
	char	__padding[256];		/* prevent seeds sharing a cache-line */
	int	seed;
} snow_seed_t;

typedef struct snow_context_t {
	unsigned	unused;
	snow_seed_t	seeds[];
} snow_context_t;

static void * snow_create_context(unsigned ticks, unsigned n_cpus, til_setup_t *setup)
{
	snow_context_t	*ctxt;

	ctxt = calloc(1, sizeof(snow_context_t) + n_cpus * sizeof(snow_seed_t));
	if (!ctxt)
		return NULL;

	for (unsigned i = 0; i < n_cpus; i++)
		ctxt->seeds[i].seed = rand();

	return ctxt;
}

static void snow_destroy_context(void *context)
{
	free(context);
}

static void snow_prepare_frame(void *context, unsigned ticks, unsigned n_cpus, til_fb_fragment_t *fragment, til_fragmenter_t *res_fragmenter)
{
	*res_fragmenter = til_fragmenter_slice_per_cpu;
}

static void snow_render_fragment(void *context, unsigned ticks, unsigned cpu, til_fb_fragment_t *fragment)
{
	snow_context_t	*ctxt = context;
	int		*seed = &ctxt->seeds[cpu].seed;

	for (unsigned y = fragment->y; y < fragment->y + fragment->height; y++) {
		for (unsigned x = fragment->x; x < fragment->x + fragment->width; x++) {
#ifdef __WIN32__
			uint32_t	pixel = rand();
#else
			uint32_t	pixel = rand_r(seed) % 256;
#endif
			til_fb_fragment_put_pixel_unchecked(fragment, 0, x, y, pixel << 16 | pixel << 8 | pixel);
		}
	}
}

til_module_t	snow_module = {
	.create_context = snow_create_context,
	.destroy_context = snow_destroy_context,
	.prepare_frame = snow_prepare_frame,
	.render_fragment = snow_render_fragment,
	.name = "snow",
	.description = "TV snow / white noise (threaded)",
	.author = "Vito Caputo <[email protected]>",
};
Written by Gediel Luchetta, a 4-minute read

How to increase the chances of successfully modernizing your legacy system

Learn about two key approaches that increase the likelihood of success in application modernization initiatives.

If your company is losing market share or not growing as planned due to factors such as a slow pace of innovation or high operating costs, an investment in modernizing your legacy system deserves attention. In this article, I listed four reasons to invest in this type of project.

As mentioned above, modernizing a core system is often a long and costly journey. However, there are two important approaches that increase the likelihood of success. Find out more about each of them below.

1. Identify the main pain point and start with it

Before starting, make a correct diagnosis of the current scenario, determining where the biggest offenders to business results are, and define the initial attack strategy, i.e., how to get results quickly and with the lowest investment.

For example, if the biggest problem for the business is slowing growth or customer churn caused by a limitation in the product's interface, modernization can start by separating the frontend and backend layers and, with that, applying new technologies to unlock new interfaces for other devices while the backend remains the same.

Another possibility is when the main issue is the high cost of infrastructure and maintenance in your own data center. One action could be migrating (lift and shift) to the cloud, or replacing a proprietary RDBMS, with its high licensing costs, with an open-source database solution.

It is important to point out that we are talking about phasing and an initial attack strategy that should solve one of the pains. Solving all the pain points will take a complete modernization: rewriting the software with the end customer at the center and using best practices in software architecture, infrastructure, technology stack, software engineering and DevSecOps.

2. Take advantage of the modernization initiative to create a truly "customer-centric" product

Use a customer-centric approach and run a discovery to determine which features really generate value for your end customer.

A very common mistake of teams involved in modernization initiatives is to assume that all the functionality present in the current software needs to be modernized and kept in the new solution. This premise also creates a false sense that the barriers to entry for competitors are higher than they really are.

As an exercise, imagine a legacy system module that has, hypothetically, 20 features. In the modernization project, should we consider migrating them all? Do end users really need and use your product because of all these features? In practice, we see many startups emerging and gaining market share with a modern, lean, easy-to-use digital product that offers only two or three of those features.

In summary, a discovery process with the end customer at the center, understanding what really generates value, can greatly simplify and streamline the modernization journey, or reveal opportunities for growth through new features that were present neither in the product nor in the legacy system's roadmap.

Read also: The modernization of Unicred's banking core ensured availability for its members and capacity for innovation

The decision to run a modernization project is multifactorial. The gains from this investment include greater agility to innovate, reduced time-to-market for new functions and features, and lower operating costs, in addition to a better ability to trace faults, ensuring greater accuracy and speed in solving problems. Using an up-to-date programming language with a strong reputation also brings long-term product security, as skilled labor is widely available.
Rule Machine Global Variables Rule Machine Global Variables [Beta] In response to community requests, we have added Global Variables to Rule Machine. This new feature is a beta release. There is preliminary functionality within a framework. We're looking for feedback as to additional functionality that is needed. Creating Global Variables Before you can use a Global Variable you must first create it in the Rule Machine app. There are four types: Number, Decimal, String and Boolean. Numbers are integer values, while Decimal has decimal point and decimal fraction. Strings are text. Boolean are true or false. Setting the value of a Global Variable You can set the value of a Global Variable in Rule actions. For Numbers and Decimals, there are four options: set the value, add to the value (can be positive or negative, so subtraction is possible), set the value from a sensor current value, and set the value from another variable. An offset can be added to a sensor value or the value of another variable. For Strings the value can be set. When setting a String value, %device%, %value%, %time%, and %date% are available for most recent event for that rule, and a Global Variable value can be included with the name of the variable inside braces, e.g. {variableName}. For Boolean, you can set it to true or false. Using Global Variables as Rule Conditions or Trigger Events After you have defined a Global Variable in Rule Machine you can use it as a Condition or Trigger Event in a Rule, Trigger, or Triggered Rule. For Numbers and Decimals, six comparison operators are available, and for Strings and Boolean equal and not equal comparisons are available. When a Global Variable changes value, the Rule with the changed variable in a Condition is evaluated, or the Trigger Event is fired. With conditions for sensors, the sensor value can be compared to a variable value, with optional offset. With conditions for variables, the variable value can be compared to another variable value, with optional offset. Using Global Variables in Notifications You can put the value of a Global Variable in the message sent for a notification. Simply put the name of the Global Variable in curly braces, as in {My-Variable}, in the string for the message. WARNING about Race Conditions Since multiple rules can reference and/or set Global Variables, it is quite possible for there to be race conditions caused by multiple simultaneous interactions with a Global Variable. For example, if two different rules, or even two different instances of the same rule, set a Global Variable at the same time, the resulting value may be one or the other value set, with no assurance as to which value it is. Similarly, a rule that references a Global Variable will get the most recent value, which may change an instant later. Think through such interactions to avoid problems and surprises. Screen Shots Below are a few screen shots to show how this all hangs together. The first one is from Rule Machine after creating two Global Variables: Now, we have a simple trigger that will put values in those variables: Notice that Beta is set to the current temperature of a sensor. Also, notice in the string to set Charlie, how it references Beta with { x }, to put that value in the string. Now, we can see the Global Variable values back in Rule Machine after that trigger fires. You can just refresh the page to see the updated variables: 15 Likes Love this! Can we add Global Variables to the Button App? 
The button App is the closest thing to a CASE STATEMENT on this platform. I'd like to make a mood variable to run along side mode. I want to try a global string for mood that I can set via a button controller like the MI Cube. These global variables work only within Rule Machine. They aren't global in the sense of being system-wide. We have looked at system-wide variables, such as your Mood, and most of the platform elements exist to support them. These will be forthcoming in a future release. But, getting apps to make use of them would be a whole other rather large undertaking. This is going to be fun :grin: 1 Like There is a bug in 2.0.5.112 that prevents the setting of a global variable from a sensor device. This will be fixed in a hot fix today. 5 Likes This is awesome functionality, I immediately found a use and have implemented it. While my clothes dryer is running, I want to know if the temperature decreases (ran out of propane). Now I have a trigger to set a Prior Temperature variable 2 minutes after the Dryer sensor reports. In a rule I compare the two. Not sure if this would have been possible previously 1 Like @bravenel hey Bruce. Just wondering the best way to capture continual changes in illuminance to a global variable but only when a specific switch is off? The following didn’t work for me (my thoughts where to compare device illuminance to itself or to the stored variable name and when not equal to save that to the variable) (“Lux. Master Bedroom” is the global variable name as well as the rule name) 1 Like Sorted. In case anyone is interested, this is what I did: Create a new global variable with type set to decimal. Create a new triggered rule with the following: This rule is called “Lux. Lounge” and so is the rule. This now saves the illuminance value of the motion sensor into a global variable but only when the lights in the room are off. Just like how I used to have it on WebCoRE in the ST days :blush: I’m now referencing that variable to determine if Lights should come on or not in other rules 3 Likes Hello, I had Hubitat now for just over a week so very new to it yet and have a lot to learn. These forums have been a great help and seems to have a great core of people interested in helping each other! With that I’ve been working with rules machine and reading quite a bit on best practices and setup for efficiency. I wavered back and forth with having a simple set of Modes (Night, Morning, Afternoon, Evening) to incorporating presence into the string (Night, Night-Away, Morning, Morning-Away, Afternoon, Afternoon-Away, Evening, Evening-Away). This seemed to be a benefit for use in Rules Machine and its restriction functionality. As I understand it the restriction functionality will cut down overhead when used properly as it terminates rule processing / evaluation immediately upon a restriction being met; prior to getting into the actual workload of pushing device and any other environment information into the database and evaluating. Due the impact restrictions have on rules machine flow it seemed that adding the -Away modes may have been beneficial. There are Pro’s and Con’s with everything and the use of modes seems to be a highly discussed variable for both Smartthings and Hubitat. 
Global Variable are a great addition and will save on virtual device creation to get a global variable like avenue to use for rules, reference, etc… In reading the Global Variable documentation compiled by bravenel we will not be able to pick up dynamic changes to the variable value on the fly; the process using the variable will pick up the new value on it’s next execution or attribute load into a process. My question is are there plans to add global variable evaluation into Rule Machines Restrictions? Similar to the mode restriction where you can select a mode\modes for a rule to progress, possibly have the ability to evaluate a Global Variable in an expression for validity like {My-bHome) = True; where bHome is boolean and is true for home false for away. Maybe this option would only be for very simple evaluation of global variables. Hopefully this makes sense and I didn’t miss discussion on it elsewhere. Again, still very new to this and learning…hope I’m making sense! 1 Like This "savings" is not material. The largest expense of any app is the loading of the app itself, not its execution. So the difference between evaluating a restriction and doing that plus evaluating a rule is small. Therefore, using restrictions to be more efficient is not what I'd call a valid strategy. Not sure what you're thinking of with this. If a rule uses a global variable as a condition, then any change to the value of that global variable (caused by some other rule) will trigger the evaluation of the rule. Likewise, a global variable as a trigger event will cause that trigger to be fired when the global variable value changes to the desired value. This is all happening dynamically. Every event that a rule is connected to causes the rule to execute, sometimes only to discover that it was paused or restricted, or that the value wasn't the right value, or that the rule's truth didn't change -- in each case no action is taken. No, there is no plan to add global variables to restrictions. Restrictions are not intended to be some second rule within a rule. They serve a very limited purpose, and I see no virtue in expanding their scope. Thank you for the quick response bravenel! I must have misunderstood what I had read in the best practices topic for rules machine and restrictions. Seems I need to do further experimenting with a rules and triggers to better understand just what causes them to load, evaluate, and execute. I appreciate the feedback and will get after some testing and logging. @bravenel what are your thoughts on adding the ability to use a GV (number) as an input for a "delay"? Example: True Action: Delay by {variable} minutes (or seconds). Or for that matter be able to use a GV as a number input throughout RM? This may be a little too major of a code change, but I can think of a couple ways I could use it in my automations. 2 Likes It's not an unreasonable request. That's one of the reasons this is Beta, to get this sort of feedback. I will look into it. One of the issues it presents is the difference between a number and a string. It is possible to input a number as a string, but it does have impact as the string then has to be converted to a number in certain contexts. Presumably, the way we'd do this is to just allow {variable} as the input for a number. It's not a small change. 2 Likes A minor but of feedback, I cannot set a GV to a negative number in the RM. Would be a nice option, but real easy to RM around. What context was this? 
Apps / RM / Create a rule / Define a rule / rule name / Select a condition / Between two dates / hurricane start and end / Done / Actions for true / Set GV / pick GV / number / etc. / "-10" / "must be greater than 0".

Just feedback, and curiosity as to whether the numeric GVs are strictly > 0.

If you use a "decimal" variable type instead of "number" it will allow negative values

That is the workaround, and I was just about to mention the same issue in a related context: the set GV "add number" action has no "subtract number" equivalent! The key benefit of the "number" variable is that if you're using it in a notification it doesn't read out "dot-0". Otherwise "decimal" has definite advantages; I use decimal variables to count down (using add number -xx) but I use number variables for notifications.

This will be fixed.

1 Like
blob: 3078c403b42d84029968b23ff20b39ebd4af295d [file] [log] [blame] /* * Dove thermal sensor driver * * Copyright (C) 2013 Andrew Lunn <[email protected]> * * This software is licensed under the terms of the GNU General Public * License version 2, as published by the Free Software Foundation, and * may be copied, distributed, and modified under those terms. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * */ #include <linux/device.h> #include <linux/err.h> #include <linux/io.h> #include <linux/kernel.h> #include <linux/of.h> #include <linux/module.h> #include <linux/platform_device.h> #include <linux/thermal.h> #define DOVE_THERMAL_TEMP_OFFSET 1 #define DOVE_THERMAL_TEMP_MASK 0x1FF /* Dove Thermal Manager Control and Status Register */ #define PMU_TM_DISABLE_OFFS 0 #define PMU_TM_DISABLE_MASK (0x1 << PMU_TM_DISABLE_OFFS) /* Dove Theraml Diode Control 0 Register */ #define PMU_TDC0_SW_RST_MASK (0x1 << 1) #define PMU_TDC0_SEL_VCAL_OFFS 5 #define PMU_TDC0_SEL_VCAL_MASK (0x3 << PMU_TDC0_SEL_VCAL_OFFS) #define PMU_TDC0_REF_CAL_CNT_OFFS 11 #define PMU_TDC0_REF_CAL_CNT_MASK (0x1FF << PMU_TDC0_REF_CAL_CNT_OFFS) #define PMU_TDC0_AVG_NUM_OFFS 25 #define PMU_TDC0_AVG_NUM_MASK (0x7 << PMU_TDC0_AVG_NUM_OFFS) /* Dove Thermal Diode Control 1 Register */ #define PMU_TEMP_DIOD_CTRL1_REG 0x04 #define PMU_TDC1_TEMP_VALID_MASK (0x1 << 10) /* Dove Thermal Sensor Dev Structure */ struct dove_thermal_priv { void __iomem *sensor; void __iomem *control; }; static int dove_init_sensor(const struct dove_thermal_priv *priv) { u32 reg; u32 i; /* Configure the Diode Control Register #0 */ reg = readl_relaxed(priv->control); /* Use average of 2 */ reg &= ~PMU_TDC0_AVG_NUM_MASK; reg |= (0x1 << PMU_TDC0_AVG_NUM_OFFS); /* Reference calibration value */ reg &= ~PMU_TDC0_REF_CAL_CNT_MASK; reg |= (0x0F1 << PMU_TDC0_REF_CAL_CNT_OFFS); /* Set the high level reference for calibration */ reg &= ~PMU_TDC0_SEL_VCAL_MASK; reg |= (0x2 << PMU_TDC0_SEL_VCAL_OFFS); writel(reg, priv->control); /* Reset the sensor */ reg = readl_relaxed(priv->control); writel((reg | PMU_TDC0_SW_RST_MASK), priv->control); writel(reg, priv->control); /* Enable the sensor */ reg = readl_relaxed(priv->sensor); reg &= ~PMU_TM_DISABLE_MASK; writel(reg, priv->sensor); /* Poll the sensor for the first reading */ for (i = 0; i < 1000000; i++) { reg = readl_relaxed(priv->sensor); if (reg & DOVE_THERMAL_TEMP_MASK) break; } if (i == 1000000) return -EIO; return 0; } static int dove_get_temp(struct thermal_zone_device *thermal, unsigned long *temp) { unsigned long reg; struct dove_thermal_priv *priv = thermal->devdata; /* Valid check */ reg = readl_relaxed(priv->control + PMU_TEMP_DIOD_CTRL1_REG); if ((reg & PMU_TDC1_TEMP_VALID_MASK) == 0x0) { dev_err(&thermal->device, "Temperature sensor reading not valid\n"); return -EIO; } /* * Calculate temperature. 
See Section 8.10.1 of 88AP510, * Documentation/arm/Marvell/README */ reg = readl_relaxed(priv->sensor); reg = (reg >> DOVE_THERMAL_TEMP_OFFSET) & DOVE_THERMAL_TEMP_MASK; *temp = ((2281638UL - (7298*reg)) / 10); return 0; } static struct thermal_zone_device_ops ops = { .get_temp = dove_get_temp, }; static const struct of_device_id dove_thermal_id_table[] = { { .compatible = "marvell,dove-thermal" }, {} }; static int dove_thermal_probe(struct platform_device *pdev) { struct thermal_zone_device *thermal = NULL; struct dove_thermal_priv *priv; struct resource *res; int ret; res = platform_get_resource(pdev, IORESOURCE_MEM, 0); if (!res) { dev_err(&pdev->dev, "Failed to get platform resource\n"); return -ENODEV; } priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); if (!priv) return -ENOMEM; priv->sensor = devm_ioremap_resource(&pdev->dev, res); if (IS_ERR(priv->sensor)) return PTR_ERR(priv->sensor); res = platform_get_resource(pdev, IORESOURCE_MEM, 1); if (!res) { dev_err(&pdev->dev, "Failed to get platform resource\n"); return -ENODEV; } priv->control = devm_ioremap_resource(&pdev->dev, res); if (IS_ERR(priv->control)) return PTR_ERR(priv->control); ret = dove_init_sensor(priv); if (ret) { dev_err(&pdev->dev, "Failed to initialize sensor\n"); return ret; } thermal = thermal_zone_device_register("dove_thermal", 0, 0, priv, &ops, NULL, 0, 0); if (IS_ERR(thermal)) { dev_err(&pdev->dev, "Failed to register thermal zone device\n"); return PTR_ERR(thermal); } platform_set_drvdata(pdev, thermal); return 0; } static int dove_thermal_exit(struct platform_device *pdev) { struct thermal_zone_device *dove_thermal = platform_get_drvdata(pdev); thermal_zone_device_unregister(dove_thermal); platform_set_drvdata(pdev, NULL); return 0; } MODULE_DEVICE_TABLE(of, dove_thermal_id_table); static struct platform_driver dove_thermal_driver = { .probe = dove_thermal_probe, .remove = dove_thermal_exit, .driver = { .name = "dove_thermal", .owner = THIS_MODULE, .of_match_table = of_match_ptr(dove_thermal_id_table), }, }; module_platform_driver(dove_thermal_driver); MODULE_AUTHOR("Andrew Lunn <[email protected]>"); MODULE_DESCRIPTION("Dove thermal driver"); MODULE_LICENSE("GPL");
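To make the conversion in dove_get_temp() above concrete, here is a small stand-alone example, separate from the driver. The raw reading is invented for illustration (the masked field is 9 bits wide, so 0..511), and the result is interpreted as millidegrees Celsius, which is how the kernel thermal core normally treats get_temp() values.

/*
 * Stand-alone illustration of the formula in dove_get_temp(); not part of
 * the driver. The register value below is made up for the example.
 */
#include <stdio.h>

int main(void)
{
	unsigned long reg = 200;	/* hypothetical masked sensor reading */
	unsigned long temp = (2281638UL - (7298 * reg)) / 10;

	/* 2281638 - 7298*200 = 822038; 822038/10 = 82203 millidegrees C */
	printf("%lu.%03lu degrees C\n", temp / 1000, temp % 1000);
	return 0;
}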
Hey WeLiveSecurity, how does biometric authentication work? Your eyes may be the window to your soul, but they can also be your airplane boarding pass or the key unlocking your phone. What’s the good and the bad of using biometric traits for authentication?
What is the speed of the Internet connection (port speed) for Shared Hosting, VPS Servers and Dedicated Servers?

For VPS Hosting
All Windows VPS and Linux VPS plans share 1 Gbps port connectivity, and we don't offer any minimum commitment for port speed. In certain time slots customers receive up to 800 Mbps, and in other time slots they receive about 10 Mbps. If you want dedicated 1 Gbps port connectivity, we can offer that to you; a dedicated 1 Gbps port will cost you around $3000 per month.
Note: The port speed is the rate at which you can upload or download data to and from your VPS.

For Shared Hosting
All our shared hosting servers are connected with 1 Gbps to 10 Gbps port speed. This speed is shared among all the shared hosting customers. We don't offer any minimum commitment for port speed. For uploading and downloading, the speed varies from 5 Mbps to 800 Mbps.

For Dedicated Servers
All dedicated servers are connected with a 1 Gbps port or a 100 Mbps unmetered port - as per your choice during the order.
16.00 - Setting State-Specific Job Limits for Utility Limits - Teradata Viewpoint Teradata Viewpoint User Guide prodname Teradata Viewpoint vrm_release 16.00 created_date October 2016 category User Guide featnum B035-2206-106K Set state-specific job limits for existing utility limits or when creating a utility limit. You can override the default by setting job limits on a per-state basis. For example, you might want to raise the job limit during a low-traffic state, and lower the job limit during a high-traffic state. 1. Edit or create a ruleset. 2. From the ruleset toolbar, click Sessions. 3. Click the Utility Limits tab. 4. Select a utility limit or create one. 5. Click the State Specific Settings tab. 6. [Optional] Do any of the following: Option Description Set a default job limit for the utility limit in all states 1. Under Default Settings, select the Job Limit check box and enter a number. 2. Select Delay or Reject. Set a job limit for the utility limit in a specific state 1. Next to a state, click . 2. Select Create State Specific Settings. 3. Select the Job Limit check box and enter a number. 4. Select Delay or Reject. 5. Click OK. 7. Click Save.
0 $\begingroup$ I know factoring is the chief means of breaking RSA keys. I know an algorithm that runs in polynomial time would be able to break an RSA key pair "quickly". But how quickly is "quickly"? Note, I'm not talking about any quantum computing at all here. $\endgroup$ • $\begingroup$ Big-O Notation (poly time is usually the union $\cup_{x \in \mathbb{N}} O(n^x)$) does not indicate actual computation time. It is a statement about the asymptotic computation time for $n \rightarrow \infty$ and ignores constant factors. $\endgroup$ – tylo Feb 5 '15 at 12:25 8 $\begingroup$ I know an algorithm that runs in polynomial time would be able to break an RSA key pair "quickly". But how quickly is "quickly"? No way to say, it might be microseconds, and it might be large multiplies of the age of the universe. When we say that an algorithm runs in polynomial time, we're not saying anything about how fast the algorithm runs given any particular input. Instead, what we're saying that, as we give the algorithm increasingly large inputs, the time it takes doesn't increase that quickly. How polynomial time is generally expressed is that there are values $c, e$ such that, given a problem of size $N$ (and in the factorization case, $N$ would be the number of bits in the RSA key), the algorithm takes time less than $cN^e$. Now, there are no limits on how big $c$ and $e$ might be, and so this doesn't give any limits on how much time a specific instance might take. On the other hand, for all known factorization algorithms, this is not true -- no matter how large values we select for $c$ and $e$, we can find problem sizes $N$ large enough that the algorithm takes more than $cN^e$ time; hence saying that an algorithm is "polytime" does say something -- it just doesn't say what you're hoping it did. $\endgroup$ • 1 $\begingroup$ Suggestions: "what we're saying is that, as we give the algorithm increasingly large inputs, the time it takes doesn't increase more quickly than some limit. $\;$ Perhaps also, change $N$ to $n$ and $e$ to $k$ as this is more consistent with standard notation in an RSA context. $\endgroup$ – fgrieu Feb 4 '15 at 17:52 2 $\begingroup$ The rationale behind polynomial vs. exponential is in tweaking the size of the keys. We need to achieve mainly two goals: • Encryption and decryption by legitimate users is reasonable fast. • Decryption by adversary without private key knowledge is prohibitively slow. (One way for decrypting by adversary might be computing private key from public key and subsequent decryption with the private key.) When the size of the key grows, time of all these operations also grows bot both legitimate users and adversaries. We want to hugely (prohibitively) increase the computation time for adversaries while maintaining only small increase of time for legitimate users (i.e. for encryption with public key and decryption with private key). The encryption schemes are usually designed to increase the time polynomially for legitimate users and exponentially for adversaries. Note that if legitimate users can do the tasks in polynomial time, we can't make it any harder than exponential for adversary, because the best attack can't perform worse than brute-force. This is the way we make the gap between legitimate users and adversaries huge. When a polynomial algorithm for factorisation is found, the gap will at least get lower. 
It might be a practical algorithm (and thus effectively an RSA break), but not necessarily:

• It might be an algorithm with a high polynomial degree (e.g. $n^{10000}$), which would perform better than today's algorithms only on very, very large inputs and which is clearly impractical.
• It might be an algorithm with a promising asymptotic complexity (e.g. $n^2$), but with a significant multiplicative or additive constant (e.g. $2^{128}$ seconds).

Nevertheless, if a polynomial-time integer factorisation algorithm is found, we have to at least reconsider the use of RSA. Such an algorithm would be a warning sign that RSA is weaker than we thought. $\endgroup$

• $\begingroup$ Maybe that's nitpicking, but the distinction is polynomial vs. non-polynomial rather than polynomial vs. exponential. We know subexponential algorithms to factor large numbers as well as to find discrete logarithms in finite fields. $\endgroup$ – Alexandre Yamajako Feb 4 '15 at 22:45
• $\begingroup$ @AlexandreYamajako I agree there are some super-polynomial and sub-exponential algorithms. They are usually $2^{\frac{n}{k}}$ for some constant $k$. How should I name them? Almost-exponential? (By the way, I am not sure if there is any algorithm harder than polynomial and easier than $2^{\frac{n}{k}}$.) $\endgroup$ – v6ak Feb 5 '15 at 6:33
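To put rough, purely illustrative numbers on the gap between polynomial and exponential running times discussed above (not tied to any real factoring algorithm), take a key size of $n = 2048$ bits. A low-degree polynomial bound such as $c\,n^3$ with $c = 1$ gives $2048^3 \approx 8.6 \times 10^9$ steps, trivial for modern hardware, whereas an exponential bound like $2^{n/2} = 2^{1024} \approx 1.8 \times 10^{308}$ steps is hopeless. On the other hand, a bound of $n^{10000}$ is still "polynomial", yet at the same $n$ it already exceeds $10^{33000}$ steps, which is why a polynomial-time algorithm need not be a practical one.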
. * * @package mahara * @subpackage blocktype-gallery * @author Catalyst IT Ltd * @author Gregor Anzelj (External Galleries, e.g. Flickr, Picasa) * @license http://www.gnu.org/copyleft/gpl.html GNU GPL * @copyright (C) 2006-2009 Catalyst IT Ltd http://catalyst.net.nz * @copyright (C) 2011 Gregor Anzelj * */ defined('INTERNAL') || die(); class PluginBlocktypeGallery extends PluginBlocktype { public static function get_title() { return get_string('title', 'blocktype.file/gallery'); } public static function get_description() { return get_string('description', 'blocktype.file/gallery'); } public static function get_categories() { return array('fileimagevideo'); } public static function get_instance_javascript(BlockInstance $instance) { $configdata = $instance->get('configdata'); switch ($configdata['style']) { case 0: // thumbnails case 2: // squarethumbs return array(); case 1: // slideshow return array('js/slideshow.js'); } } public static function get_instance_config_javascript() { return array('js/configform.js'); } public static function render_instance(BlockInstance $instance, $editing=false) { $configdata = $instance->get('configdata'); // this will make sure to unserialize it for us $configdata['viewid'] = $instance->get('view'); $style = isset($configdata['style']) ? intval($configdata['style']) : 2; $copyright = null; // Needed to set Panoramio copyright later... switch ($style) { case 0: // thumbnails $template = 'thumbnails'; $width = isset($configdata['width']) ? $configdata['width'] : 75; break; case 1: // slideshow $template = 'slideshow'; $width = isset($configdata['width']) ? $configdata['width'] : 400; break; case 2: // square thumbnails $template = 'squarethumbs'; $width = isset($configdata['width']) ? $configdata['width'] : 75; break; } $images = array(); $slimbox2 = get_config_plugin('blocktype', 'gallery', 'useslimbox2'); if ($slimbox2) { $slimbox2attr = 'lightbox_' . $instance->get('id'); } else { $slimbox2attr = null; } // if we're trying to embed external gallery (thumbnails or slideshow) if (isset($configdata['select']) && $configdata['select'] == 2) { $gallery = self::make_gallery_url($configdata['external']); if (empty($gallery)) { return get_string('externalnotsupported', 'blocktype.file/gallery'); } $url = isset($gallery['url']) ? hsc($gallery['url']) : null; $type = isset($gallery['type']) ? hsc($gallery['type']) : null; $var1 = isset($gallery['var1']) ? hsc($gallery['var1']) : null; $var2 = isset($gallery['var2']) ? hsc($gallery['var2']) : null; switch ($type) { case 'widget': /***************************** Roy Tanck's FLICKR WIDGET for Flickr RSS & Picasa RSS http://www.roytanck.com/get-my-flickr-widget/ *****************************/ $widget_sizes = array(100, 200, 300); $width = self::find_nearest($widget_sizes, $width); $images = urlencode(str_replace('&', '&', $url)); $template = 'imagecloud'; break; case 'picasa': // Slideshow if ($style == 1) { $picasa_show_sizes = array(144, 288, 400, 600, 800); $width = self::find_nearest($picasa_show_sizes, $width); $height = round($width * 0.75); $images = array('user' => $var1, 'gallery' => $var2); $template = 'picasashow'; } // Thumbnails else { $picasa_thumbnails = array(32, 48, 64, 72, 104, 144, 150, 160); $width = self::find_nearest($picasa_thumbnails, $width); // If the Thumbnails should be Square... if ($style == 2) { $small = 's' . $width . '-c'; $URL = 'http://picasaweb.google.com/data/feed/api/user/' . $var1 . '/album/' . $var2 . '?kind=photo&thumbsize=' . $width . 'c'; } else { $small = 's' . 
$width; $URL = 'http://picasaweb.google.com/data/feed/api/user/' . $var1 . '/album/' . $var2 . '?kind=photo&thumbsize=' . $width; } $big = 's' . get_config_plugin('blocktype', 'gallery', 'previewwidth'); $xmlDoc = new DOMDocument('1.0', 'UTF-8'); $config = array( CURLOPT_URL => $URL, CURLOPT_RETURNTRANSFER => true, ); $result = mahara_http_request($config); $xmlDoc->loadXML($result->data); $photos = $xmlDoc->getElementsByTagNameNS('http://search.yahoo.com/mrss/', 'group'); foreach ($photos as $photo) { $children = $photo->cloneNode(true); $thumb = $children->getElementsByTagNameNS('http://search.yahoo.com/mrss/', 'thumbnail')->item(0)->getAttribute('url'); $description = null; if (isset($children->getElementsByTagNameNS('http://search.yahoo.com/mrss/', 'description')->item(0)->firstChild->nodeValue)) { $description = $children->getElementsByTagNameNS('http://search.yahoo.com/mrss/', 'description')->item(0)->firstChild->nodeValue; } $images[] = array( 'link' => str_replace($small, $big, $thumb), 'source' => $thumb, 'title' => $description, 'slimbox2' => $slimbox2attr ); } } break; case 'flickr': // Slideshow if ($style == 1) { $flickr_show_sizes = array(400, 500, 700, 800); $width = self::find_nearest($flickr_show_sizes, $width); $height = round($width * 0.75); $images = array('user' => $var1, 'gallery' => $var2); $template = 'flickrshow'; } // Thumbnails else { $width = 75; // Currently only thumbnail size, that Flickr supports $api_key = get_config_plugin('blocktype', 'gallery', 'flickrapikey'); $URL = 'http://api.flickr.com/services/rest/?method=flickr.photosets.getPhotos&extras=url_sq,url_t&photoset_id=' . $var2 . '&api_key=' . $api_key; $xmlDoc = new DOMDocument('1.0', 'UTF-8'); $config = array( CURLOPT_URL => $URL, CURLOPT_RETURNTRANSFER => true, ); $result = mahara_http_request($config); $xmlDoc->loadXML($result->data); $photos = $xmlDoc->getElementsByTagName('photo'); foreach ($photos as $photo) { // If the Thumbnails should be Square... if ($style == 2) { $thumb = $photo->getAttribute('url_sq'); $link = str_replace('_s.jpg', '_b.jpg', $thumb); } else { $thumb = $photo->getAttribute('url_t'); $link = str_replace('_t.jpg', '_b.jpg', $thumb); } $description = $photo->getAttribute('title'); $images[] = array( 'link' => $link, 'source' => $thumb, 'title' => $description, 'slimbox2' => $slimbox2attr ); } } break; case 'panoramio': // Slideshow if ($style == 1) { $height = round($width * 0.75); $images = array('user' => $var1); $template = 'panoramioshow'; } // Thumbnails else { $copyright = get_string('panoramiocopyright', 'blocktype.file/gallery'); $URL = 'http://www.panoramio.com/map/get_panoramas.php?set=' . $var1 . '&from=0&to=50&size=original&mapfilter=true'; $config = array( CURLOPT_URL => $URL, CURLOPT_RETURNTRANSFER => true, ); $result = mahara_http_request($config); $data = json_decode($result->data, true); foreach ($data['photos'] as $photo) { $link = str_replace('/original/', '/large/', $photo['photo_file_url']); // If the Thumbnails should be Square... if ($style == 2) { $thumb = str_replace('/original/', '/square/', $photo['photo_file_url']); $width = 60; // Currently only square thumbnail size, that Panoramio supports } else { $thumb = str_replace('/original/', '/thumbnail/', $photo['photo_file_url']); } $title = (!empty($photo['photo_title']) ? $photo['photo_title'] : get_string('Photo', 'blocktype.file/gallery')); $description = '' . $title . '' . ' ' . get_string('by', 'blocktype.file/gallery') . ' ' . '' . $photo['owner_name'] . 
''; $images[] = array( 'link' => $link, 'source' => $thumb, 'title' => $description, 'slimbox2' => $slimbox2attr ); } } break; case 'photobucket': // Slideshow if ($style == 1) { $height = round($width * 0.75); $images = array('url' => $url, 'user' => $var1, 'album' => $var2); $template = 'photobucketshow'; } // Thumbnails else { $consumer_key = get_config_plugin('blocktype', 'gallery', 'pbapikey'); // PhotoBucket API key $consumer_secret = get_config_plugin('blocktype', 'gallery', 'pbapiprivatekey'); //PhotoBucket API private key $oauth_signature_method = 'HMAC-SHA1'; $oauth_version = '1.0'; $oauth_timestamp = time(); $mt = microtime(); $rand = mt_rand(); $oauth_nonce = md5($mt . $rand); $method = 'GET'; $albumname = $var1 . '/' . $var2; $api_url = 'http://api.photobucket.com/album/' . urlencode($albumname); $params = null; $paramstring = 'oauth_consumer_key=' . $consumer_key . '&oauth_nonce=' . $oauth_nonce . '&oauth_signature_method=' . $oauth_signature_method . '&oauth_timestamp=' . $oauth_timestamp . '&oauth_version=' . $oauth_version; $base = urlencode($method) . '&' . urlencode($api_url) . '&' . urlencode($paramstring); $oauth_signature = base64_encode(hash_hmac('sha1', $base, $consumer_secret.'&', true)); $URL = $api_url . '?' . $paramstring . '&oauth_signature=' . urlencode($oauth_signature); $xmlDoc = new DOMDocument('1.0', 'UTF-8'); $config = array( CURLOPT_URL => $URL, CURLOPT_HEADER => false, CURLOPT_RETURNTRANSFER => true, ); $result = mahara_http_request($config); $xmlDoc->loadXML($result->data); $xmlDoc2 = new DOMDocument('1.0', 'UTF-8'); $config2 = array( CURLOPT_URL => $xmlDoc->getElementsByTagName('url')->item(0)->firstChild->nodeValue, CURLOPT_HEADER => false, CURLOPT_RETURNTRANSFER => true, ); $result2 = mahara_http_request($config2); $xmlDoc2->loadXML($result->data); $photos = $xmlDoc2->getElementsByTagName('media'); foreach ($photos as $photo) { $children = $photo->cloneNode(true); $link = $children->getElementsByTagName('url')->item(0)->firstChild->nodeValue; $thumb = $children->getElementsByTagName('thumb')->item(0)->firstChild->nodeValue; $description = null; if (isset($children->getElementsByTagName('description')->item(0)->firstChild->nodeValue)) { $description = $children->getElementsByTagName('description')->item(0)->firstChild->nodeValue; } $images[] = array( 'link' => $link, 'source' => $thumb, 'title' => $description, 'slimbox2' => $slimbox2attr ); } } break; case 'windowslive': // Slideshow if ($style == 1) { $images = array('url' => $url, 'user' => $var1, 'album' => $var2); $template = 'windowsliveshow'; } // Thumbnails else { $config = array( CURLOPT_URL => str_replace(' ', '%20', $url), CURLOPT_HEADER => false, CURLOPT_RETURNTRANSFER => true, ); $result = mahara_http_request($config); $data = $result->data; // Extract data about images and thumbs from HTML source - hack! 
preg_match_all("#previewImageUrl: '([a-zA-Z0-9\_\-\.\\\/]+)'#", $data, $photos); preg_match_all("#thumbnailImageUrl: '([a-zA-Z0-9\_\-\.\\\/]+)'#", $data, $thumbs); for ($i = 0; $i < sizeof($photos[1]); $i++) { $images[] = array( 'link' => str_replace(array('\x3a','\x2f','\x25','\x3fpsid\x3d1'), array(':','/','%',''), $photos[1][$i]), 'source' => str_replace(array('\x3a','\x2f','\x25','\x3fpsid\x3d1'), array(':','/','%',''), $thumbs[1][$i]), 'title' => null, 'slimbox2' => $slimbox2attr ); } } break; } } else { $artefactids = array(); if (isset($configdata['select']) && $configdata['select'] == 1 && is_array($configdata['artefactids'])) { $artefactids = $configdata['artefactids']; } else if (!empty($configdata['artefactid'])) { // Get descendents of this folder. $artefactids = artefact_get_descendants(array(intval($configdata['artefactid']))); } // This can be either an image or profileicon. They both implement // render_self foreach ($artefactids as $artefactid) { $image = $instance->get_artefact_instance($artefactid); if ($image instanceof ArtefactTypeProfileIcon) { $src = get_config('wwwroot') . 'thumb.php?type=profileiconbyid&id=' . $artefactid; $description = $image->get('title'); } else if ($image instanceof ArtefactTypeImage) { $src = get_config('wwwroot') . 'artefact/file/download.php?file=' . $artefactid; $src .= '&view=' . $instance->get('view'); $description = $image->get('description'); } else { continue; } if ($slimbox2) { $link = $src . '&maxwidth=' . get_config_plugin('blocktype', 'gallery', 'previewwidth'); } else { $link = get_config('wwwroot') . 'view/artefact.php?artefact=' . $artefactid . '&view=' . $instance->get('view'); } // If the Thumbnails are Square or not... if ($style == 2) { $src .= '&size=' . $width . 'x' . $width; } else { $src .= '&maxwidth=' . $width; } $images[] = array( 'link' => $link, 'source' => $src, 'title' => $image->get('description'), 'slimbox2' => $slimbox2attr ); } } $smarty = smarty_core(); $smarty->assign('instanceid', $instance->get('id')); $smarty->assign('count', count($images)); $smarty->assign('images', $images); $smarty->assign('width', $width); if (isset($height)) { $smarty->assign('height', $height); } if (isset($needsapikey)) { $smarty->assign('needsapikey', $needsapikey); } $smarty->assign('frame', get_config_plugin('blocktype', 'gallery', 'photoframe')); $smarty->assign('copyright', $copyright); return $smarty->fetch('blocktype:gallery:' . $template . 
'.tpl'); } public static function has_config() { return true; } public static function get_config_options() { $elements = array(); $elements['gallerysettings'] = array( 'type' => 'fieldset', 'legend' => get_string('gallerysettings', 'blocktype.file/gallery'), 'collapsible' => true, 'elements' => array( 'useslimbox2' => array( 'type' => 'checkbox', 'title' => get_string('useslimbox2', 'blocktype.file/gallery'), 'description' => get_string('useslimbox2desc', 'blocktype.file/gallery'), 'defaultvalue' => get_config_plugin('blocktype', 'gallery', 'useslimbox2'), ), 'photoframe' => array( 'type' => 'checkbox', 'title' => get_string('photoframe', 'blocktype.file/gallery'), 'description' => get_string('photoframedesc', 'blocktype.file/gallery'), 'defaultvalue' => get_config_plugin('blocktype', 'gallery', 'photoframe'), ), 'previewwidth' => array( 'type' => 'text', 'size' => 4, 'title' => get_string('previewwidth', 'blocktype.file/gallery'), 'description' => get_string('previewwidthdesc', 'blocktype.file/gallery'), 'defaultvalue' => get_config_plugin('blocktype', 'gallery', 'previewwidth'), 'rules' => array('integer' => true, 'minvalue' => 16, 'maxvalue' => 1600), ) ), ); $elements['flickrsettings'] = array( 'type' => 'fieldset', 'legend' => get_string('flickrsettings', 'blocktype.file/gallery'), 'collapsible' => true, 'collapsed' => true, 'elements' => array( 'flickrapikey' => array( 'type' => 'text', 'title' => get_string('flickrapikey', 'blocktype.file/gallery'), 'size' => 40, // Flickr API key is actually 32 characters long 'description' => get_string('flickrapikeydesc', 'blocktype.file/gallery'), 'defaultvalue' => get_config_plugin('blocktype', 'gallery', 'flickrapikey'), ), ), ); $elements['photobucketsettings'] = array( 'type' => 'fieldset', 'legend' => get_string('pbsettings', 'blocktype.file/gallery'), 'collapsible' => true, 'collapsed' => true, 'elements' => array( 'pbapikey' => array( 'type' => 'text', 'title' => get_string('pbapikey', 'blocktype.file/gallery'), 'size' => 20, // PhotoBucket API key is actually 9 characters long 'description' => get_string('pbapikeydesc', 'blocktype.file/gallery'), 'defaultvalue' => get_config_plugin('blocktype', 'gallery', 'pbapikey'), ), 'pbapiprivatekey' => array( 'type' => 'text', 'title' => get_string('pbapiprivatekey', 'blocktype.file/gallery'), 'size' => 40, // PhotoBucket API private key is actually 32 characters long 'defaultvalue' => get_config_plugin('blocktype', 'gallery', 'pbapiprivatekey'), ), ), ); return array( 'elements' => $elements, ); } public static function save_config_options($values) { set_config_plugin('blocktype', 'gallery', 'useslimbox2', (int)$values['useslimbox2']); set_config_plugin('blocktype', 'gallery', 'photoframe', (int)$values['photoframe']); set_config_plugin('blocktype', 'gallery', 'previewwidth', (int)$values['previewwidth']); set_config_plugin('blocktype', 'gallery', 'flickrapikey', $values['flickrapikey']); set_config_plugin('blocktype', 'gallery', 'pbapikey', $values['pbapikey']); set_config_plugin('blocktype', 'gallery', 'pbapiprivatekey', $values['pbapiprivatekey']); } public static function postinst($prevversion) { if ($prevversion == 0) { set_config_plugin('blocktype', 'gallery', 'useslimbox2', 1); // Use Slimbox 2 by default set_config_plugin('blocktype', 'gallery', 'photoframe', 1); // Show frame around photos set_config_plugin('blocktype', 'gallery', 'previewwidth', 1024); // Maximum photo width for slimbox2 preview } } public static function has_instance_config() { return true; } public static function 
instance_config_form($instance) { $configdata = $instance->get('configdata'); safe_require('artefact', 'file'); $instance->set('artefactplugin', 'file'); $user = $instance->get('view_obj')->get('owner'); $select_options = array( 0 => get_string('selectfolder', 'blocktype.file/gallery'), 1 => get_string('selectimages', 'blocktype.file/gallery'), 2 => get_string('selectexternal', 'blocktype.file/gallery'), ); $style_options = array( 0 => get_string('stylethumbs', 'blocktype.file/gallery'), 2 => get_string('stylesquares', 'blocktype.file/gallery'), 1 => get_string('styleslideshow', 'blocktype.file/gallery'), ); if (isset($configdata['select']) && $configdata['select'] == 1) { $imageids = isset($configdata['artefactids']) ? $configdata['artefactids'] : array(); $imageselector = self::imageselector($instance, $imageids); $folderselector = self::folderselector($instance, null, 'hidden'); $externalurl = self::externalurl($instance, null, 'hidden'); } else if (isset($configdata['select']) && $configdata['select'] == 2) { $imageselector = self::imageselector($instance, null, 'hidden'); $folderselector = self::folderselector($instance, null, 'hidden'); $url = isset($configdata['external']) ? urldecode($configdata['external']) : null; $externalurl = self::externalurl($instance, $url); } else { $imageselector = self::imageselector($instance, null, 'hidden'); $folderid = !empty($configdata['artefactid']) ? array(intval($configdata['artefactid'])) : null; $folderselector = self::folderselector($instance, $folderid); $externalurl = self::externalurl($instance, null, 'hidden'); } return array( 'user' => array( 'type' => 'hidden', 'value' => $user, ), 'select' => array( 'type' => 'radio', 'title' => get_string('select', 'blocktype.file/gallery'), 'options' => $select_options, 'defaultvalue' => (isset($configdata['select'])) ? $configdata['select'] : 0, 'separator' => ' ', ), 'images' => $imageselector, 'folder' => $folderselector, 'external' => $externalurl, 'style' => array( 'type' => 'radio', 'title' => get_string('style', 'blocktype.file/gallery'), 'options' => $style_options, 'defaultvalue' => (isset($configdata['style'])) ? $configdata['style'] : 2, // Square thumbnails should be default... 'separator' => ' ', ), 'width' => array( 'type' => 'text', 'title' => get_string('width', 'blocktype.file/gallery'), 'size' => 3, 'description' => get_string('widthdescription', 'blocktype.file/gallery'), 'rules' => array( 'minvalue' => 16, 'maxvalue' => get_config('imagemaxwidth'), ), 'defaultvalue' => (isset($configdata['width'])) ? 
$configdata['width'] : '75', ), ); } public static function instance_config_save($values) { if ($values['select'] == 0) { $values['artefactid'] = $values['folder']; unset($values['artefactids']); unset($values['external']); } else if ($values['select'] == 1) { $values['artefactids'] = $values['images']; unset($values['artefactid']); unset($values['external']); } else if ($values['select'] == 2) { unset($values['artefactid']); unset($values['artefactids']); } unset($values['folder']); unset($values['images']); return $values; } public static function imageselector(&$instance, $default=array(), $class=null) { $element = ArtefactTypeFileBase::blockconfig_filebrowser_element($instance, $default); $element['title'] = get_string('Images', 'artefact.file'); $element['name'] = 'images'; if ($class) { $element['class'] = $class; } $element['config']['selectone'] = false; $element['filters'] = array( 'artefacttype' => array('image', 'profileicon'), ); return $element; } public static function folderselector(&$instance, $default=array(), $class=null) { $element = ArtefactTypeFileBase::blockconfig_filebrowser_element($instance, $default); $element['title'] = get_string('folder', 'artefact.file'); $element['name'] = 'folder'; if ($class) { $element['class'] = $class; } $element['config']['upload'] = false; $element['config']['selectone'] = true; $element['config']['selectfolders'] = true; $element['filters'] = array( 'artefacttype' => array('folder'), ); return $element; } public static function externalurl(&$instance, $default=null, $class=null) { $element['title'] = get_string('externalgalleryurl', 'blocktype.file/gallery'); $element['name'] = 'external'; $element['type'] = 'textarea'; if ($class) { $element['class'] = $class; } $element['rows'] = 5; $element['cols'] = 76; $element['defaultvalue'] = $default; $element['description'] = ''. get_string('externalgalleryurldesc', 'blocktype.file/gallery') . self::get_supported_external_galleries() . 
''; $element['help'] = true; return $element; } private static function make_gallery_url($url) { static $embedsources = array( // PicasaWeb Album (RSS) - for Roy Tanck's widget array( 'match' => '#.*picasaweb.google.([a-zA-Z]{3}).*user\/([a-zA-Z0-9\_\-\=\&\.\/\:\%]+)\/albumid\/(\d+).*#', 'url' => 'http://picasaweb.google.$1/data/feed/base/user/$2/albumid/$3?alt=rss&kind=photo', 'type' => 'widget', 'var1' => '$2', 'var2' => '$3', ), // PicasaWeb Album (embed code) array( 'match' => '#.*picasaweb.google.([a-zA-Z]{3})\/s\/c.*picasaweb.google.([a-zA-Z]{3})\/([a-zA-Z0-9\_\-\.]+)\/([a-zA-Z0-9\_\-\=\&\.\/\:\%]+).*#', 'url' => 'http://picasaweb.google.$2', 'type' => 'picasa', 'var1' => '$3', 'var2' => '$4', ), // PicasaWeb Album (direct link) array( 'match' => '#.*picasaweb.google.([a-zA-Z]{3})\/([a-zA-Z0-9\_\-\.]+)\/([a-zA-Z0-9\_\-\=\&\.\/\:\%]+).*#', 'url' => 'http://picasaweb.google.$1', 'type' => 'picasa', 'var1' => '$2', 'var2' => '$3', ), // Flickr Set (RSS) - for Roy Tanck's widget array( 'match' => '#.*api.flickr.com.*set=(\d+).*nsid=([a-zA-Z0-9\@]+).*#', 'url' => 'http://api.flickr.com/services/feeds/photoset.gne?set=$1&nsid=$2', 'type' => 'widget', 'var1' => '$2', 'var2' => '$1', ), // Flickr Set (direct link) array( 'match' => '#.*www.flickr.com/photos/([a-zA-Z0-9\_\-\.\@]+).*/sets/([0-9]+).*#', 'url' => 'http://www.flickr.com/photos/$1/sets/$2/', 'type' => 'flickr', 'var1' => '$1', 'var2' => '$2', ), // Panoramio User Photos (direct link) array( 'match' => '#.*www.panoramio.com/user/(\d+).*#', 'url' => 'http://www.panoramio.com/user/$1/', 'type' => 'panoramio', 'var1' => '$1', 'var2' => null, ), // Photobucket User Photos (direct link) array( 'match' => '#.*([a-zA-Z0-9]+).photobucket.com/albums/([a-zA-Z0-9]+)/([a-zA-Z0-9\.\,\:\;\@\-\_\+\ ]+).*#', 'url' => 'http://$1.photobucket.com/albums/$2/$3', 'type' => 'photobucket', 'var1' => '$3', 'var2' => null, ), // Photobucket User Album Photos (direct link) array( 'match' => '#.*([a-zA-Z0-9]+).photobucket.com/albums/([a-zA-Z0-9]+)/([a-zA-Z0-9\.\,\:\;\@\-\_\+\ ]+)/([a-zA-Z0-9\.\,\:\;\@\-\_\+\ ]*).*#', 'url' => 'http://$1.photobucket.com/albums/$2/$3/$4', 'type' => 'photobucket', 'var1' => '$3', 'var2' => '$4', ), // Windows Live Photo Gallery (MUST be a direct link to one of the photos in the album!) // This is a hack - in order to show photos from the album, we require a direct link to one of the photos. array( 'match' => '#.*cid-([a-zA-Z0-9]+).photos.live.com/self.aspx/([a-zA-Z0-9\.\,\:\;\@\-\_\+\%\ ]+)/([a-zA-Z0-9\,\:\;\@\-\_\+\%\ ]+).(gif|png|jpg|jpeg)*#', 'url' => 'http://cid-$1.photos.live.com/self.aspx/$2/$3.$4', 'type' => 'windowslive', 'var1' => 'cid-$1', 'var2' => '$2', ), ); foreach ($embedsources as $source) { $url = htmlspecialchars_decode($url); // convert & back to &, etc. if (preg_match($source['match'], $url)) { $images_url = preg_replace($source['match'], $source['url'], $url); $images_type = $source['type']; $images_var1 = preg_replace($source['match'], $source['var1'], $url); $images_var2 = preg_replace($source['match'], $source['var2'], $url); return array('url' => $images_url, 'type' => $images_type, 'var1' => $images_var1, 'var2' => $images_var2); } } return array(); } /** * Returns a block of HTML that the Gallery block can use to list * which external galleries or photo services are supported. 
*/ private static function get_supported_external_galleries() { $smarty = smarty_core(); $smarty->assign('wwwroot', get_config('wwwroot')); if (stripos(get_config('wwwroot'), 'https') === 0) { $smarty->assign('protocol', 'https'); } else { $smarty->assign('protocol', 'http'); } return $smarty->fetch('blocktype:gallery:supported.tpl'); } // Function to find nearest value (in array of values) to given value // e.g.: user defined thumbnail width is 75, abvaliable picasa thumbnails are array(32, 48, 64, 72, 104, 144, 150, 160) // so this function should return 72 (which is nearest form available values) // Function found at http://www.sitepoint.com/forums/showthread.php?t=537541 public static function find_nearest($values, $item) { if (in_array($item,$values)) { $out = $item; } else { sort($values); $length = count($values); for ($i=0; $i<$length; $i++) { if ($values[$i] > $item) { if ($i == 0) { return $values[$i]; } $out = ($item - $values[$i-1]) > ($values[$i]-$item) ? $values[$i] : $values[$i-1]; break; } } } if (!isset($out)) { $out = end($values); } return $out; } public static function artefactchooser_element($default=null) { } public static function default_copy_type() { return 'full'; } }
fread
From cppreference.com < c | io

Defined in header <stdio.h>

```c
size_t fread( void          *buffer, size_t size, size_t count,
              FILE          *stream );                               /* (until C99) */
size_t fread( void *restrict buffer, size_t size, size_t count,
              FILE *restrict stream );                               /* (since C99) */
```

Reads up to count objects into the array buffer from the given input stream stream as if by calling fgetc size times for each object, and storing the results, in the order obtained, into the successive positions of buffer, which is reinterpreted as an array of unsigned char. The file position indicator for the stream is advanced by the number of characters read.

If an error occurs, the resulting value of the file position indicator for the stream is indeterminate. If a partial element is read, its value is indeterminate.

Parameters

buffer - pointer to the array where the read objects are stored
size   - size of each object in bytes
count  - the number of the objects to be read
stream - the stream to read

Return value

Number of objects read successfully, which may be less than count if an error or end-of-file condition occurs. If size or count is zero, fread returns zero and performs no other action.

Example

```c
#include <stdio.h>

enum { SIZE = 5 };

int main(void)
{
    double a[SIZE] = {1., 2., 3., 4., 5.};
    FILE *fp = fopen("test.bin", "wb"); // must use binary mode
    fwrite(a, sizeof *a, SIZE, fp);     // writes an array of doubles
    fclose(fp);

    double b[SIZE];
    fp = fopen("test.bin", "rb");
    size_t ret_code = fread(b, sizeof *b, SIZE, fp); // reads an array of doubles
    if (ret_code == SIZE) {
        puts("Array read successfully, contents: ");
        for (int n = 0; n < SIZE; ++n)
            printf("%f ", b[n]);
        putchar('\n');
    }
    else { // error handling
        if (feof(fp))
            printf("Error reading test.bin: unexpected end of file\n");
        else if (ferror(fp)) {
            perror("Error reading test.bin");
        }
    }
}
```

Output:

1.000000 2.000000 3.000000 4.000000 5.000000

References

- C11 standard (ISO/IEC 9899:2011): 7.21.8.1 The fread function (p: 335)
- C99 standard (ISO/IEC 9899:1999): 7.19.8.1 The fread function (p: 301)
- C89/C90 standard (ISO/IEC 9899:1990): 4.9.8.1 The fread function

See also

scanf, fscanf, sscanf - reads formatted input from stdin, a file stream or a buffer (function)
fgets - gets a character string from a file stream (function)
fwrite - writes to a file (function)
Infographic | The Physical Internet

The Physical Internet is not something in the cloud, but rather a structure made of physical objects: routers, cables, antennas, internet exchange points and data centers are just some of the elements that make this communication possible. The visualization focuses on showing the physical structure and the actual number of three main objects: data centers, where data is organized and stored; internet exchange points, which allow different service providers to exchange internet traffic; and submarine cables, which carry telecommunication signals across oceans and seas.

By looking at the visualization, one can easily see the difference between the countries that are connected and the ones that are being left out. For example, the United States has the greatest number of internet exchange points, data centers and submarine cables, which makes it the most connected country in the world. The visualization was produced using a quantitative approach for analyzing the data.
Source code for statsmodels.stats.multitest '''Multiple Testing and P-Value Correction Author: Josef Perktold License: BSD-3 ''' import numpy as np from statsmodels.stats._knockoff import RegressionFDR __all__ = ['fdrcorrection', 'fdrcorrection_twostage', 'local_fdr', 'multipletests', 'NullDistribution', 'RegressionFDR'] # ============================================== # # Part 1: Multiple Tests and P-Value Correction # # ============================================== def _ecdf(x): '''no frills empirical cdf used in fdrcorrection ''' nobs = len(x) return np.arange(1,nobs+1)/float(nobs) multitest_methods_names = {'b': 'Bonferroni', 's': 'Sidak', 'h': 'Holm', 'hs': 'Holm-Sidak', 'sh': 'Simes-Hochberg', 'ho': 'Hommel', 'fdr_bh': 'FDR Benjamini-Hochberg', 'fdr_by': 'FDR Benjamini-Yekutieli', 'fdr_tsbh': 'FDR 2-stage Benjamini-Hochberg', 'fdr_tsbky': 'FDR 2-stage Benjamini-Krieger-Yekutieli', 'fdr_gbs': 'FDR adaptive Gavrilov-Benjamini-Sarkar' } _alias_list = [['b', 'bonf', 'bonferroni'], ['s', 'sidak'], ['h', 'holm'], ['hs', 'holm-sidak'], ['sh', 'simes-hochberg'], ['ho', 'hommel'], ['fdr_bh', 'fdr_i', 'fdr_p', 'fdri', 'fdrp'], ['fdr_by', 'fdr_n', 'fdr_c', 'fdrn', 'fdrcorr'], ['fdr_tsbh', 'fdr_2sbh'], ['fdr_tsbky', 'fdr_2sbky', 'fdr_twostage'], ['fdr_gbs'] ] multitest_alias = {} for m in _alias_list: multitest_alias[m[0]] = m[0] for a in m[1:]: multitest_alias[a] = m[0] [docs]def multipletests(pvals, alpha=0.05, method='hs', is_sorted=False, returnsorted=False): """ Test results and p-value correction for multiple tests Parameters ---------- pvals : array_like, 1-d uncorrected p-values. Must be 1-dimensional. alpha : float FWER, family-wise error rate, e.g. 0.1 method : str Method used for testing and adjustment of pvalues. Can be either the full name or initial letters. Available methods are: - `bonferroni` : one-step correction - `sidak` : one-step correction - `holm-sidak` : step down method using Sidak adjustments - `holm` : step-down method using Bonferroni adjustments - `simes-hochberg` : step-up method (independent) - `hommel` : closed method based on Simes tests (non-negative) - `fdr_bh` : Benjamini/Hochberg (non-negative) - `fdr_by` : Benjamini/Yekutieli (negative) - `fdr_tsbh` : two stage fdr correction (non-negative) - `fdr_tsbky` : two stage fdr correction (non-negative) is_sorted : bool If False (default), the p_values will be sorted, but the corrected pvalues are in the original order. If True, then it assumed that the pvalues are already sorted in ascending order. returnsorted : bool not tested, return sorted p-values instead of original sequence Returns ------- reject : ndarray, boolean true for hypothesis that can be rejected for given alpha pvals_corrected : ndarray p-values corrected for multiple tests alphacSidak : float corrected alpha for Sidak method alphacBonf : float corrected alpha for Bonferroni method Notes ----- There may be API changes for this function in the future. Except for 'fdr_twostage', the p-value correction is independent of the alpha specified as argument. In these cases the corrected p-values can also be compared with a different alpha. In the case of 'fdr_twostage', the corrected p-values are specific to the given alpha, see ``fdrcorrection_twostage``. The 'fdr_gbs' procedure is not verified against another package, p-values are derived from scratch and are not derived in the reference. In Monte Carlo experiments the method worked correctly and maintained the false discovery rate. 
All procedures that are included, control FWER or FDR in the independent case, and most are robust in the positively correlated case. `fdr_gbs`: high power, fdr control for independent case and only small violation in positively correlated case **Timing**: Most of the time with large arrays is spent in `argsort`. When we want to calculate the p-value for several methods, then it is more efficient to presort the pvalues, and put the results back into the original order outside of the function. Method='hommel' is very slow for large arrays, since it requires the evaluation of n partitions, where n is the number of p-values. """ import gc pvals = np.asarray(pvals) alphaf = alpha # Notation ? if not is_sorted: sortind = np.argsort(pvals) pvals = np.take(pvals, sortind) ntests = len(pvals) alphacSidak = 1 - np.power((1. - alphaf), 1./ntests) alphacBonf = alphaf / float(ntests) if method.lower() in ['b', 'bonf', 'bonferroni']: reject = pvals <= alphacBonf pvals_corrected = pvals * float(ntests) elif method.lower() in ['s', 'sidak']: reject = pvals <= alphacSidak pvals_corrected = -np.expm1(ntests * np.log1p(-pvals)) elif method.lower() in ['hs', 'holm-sidak']: alphacSidak_all = 1 - np.power((1. - alphaf), 1./np.arange(ntests, 0, -1)) notreject = pvals > alphacSidak_all del alphacSidak_all nr_index = np.nonzero(notreject)[0] if nr_index.size == 0: # nonreject is empty, all rejected notrejectmin = len(pvals) else: notrejectmin = np.min(nr_index) notreject[notrejectmin:] = True reject = ~notreject del notreject # It's eqivalent to 1 - np.power((1. - pvals), # np.arange(ntests, 0, -1)) # but prevents the issue of the floating point precision pvals_corrected_raw = -np.expm1(np.arange(ntests, 0, -1) * np.log1p(-pvals)) pvals_corrected = np.maximum.accumulate(pvals_corrected_raw) del pvals_corrected_raw elif method.lower() in ['h', 'holm']: notreject = pvals > alphaf / np.arange(ntests, 0, -1) nr_index = np.nonzero(notreject)[0] if nr_index.size == 0: # nonreject is empty, all rejected notrejectmin = len(pvals) else: notrejectmin = np.min(nr_index) notreject[notrejectmin:] = True reject = ~notreject pvals_corrected_raw = pvals * np.arange(ntests, 0, -1) pvals_corrected = np.maximum.accumulate(pvals_corrected_raw) del pvals_corrected_raw gc.collect() elif method.lower() in ['sh', 'simes-hochberg']: alphash = alphaf / np.arange(ntests, 0, -1) reject = pvals <= alphash rejind = np.nonzero(reject) if rejind[0].size > 0: rejectmax = np.max(np.nonzero(reject)) reject[:rejectmax] = True pvals_corrected_raw = np.arange(ntests, 0, -1) * pvals pvals_corrected = np.minimum.accumulate(pvals_corrected_raw[::-1])[::-1] del pvals_corrected_raw elif method.lower() in ['ho', 'hommel']: # we need a copy because we overwrite it in a loop a = pvals.copy() for m in range(ntests, 1, -1): cim = np.min(m * pvals[-m:] / np.arange(1,m+1.)) a[-m:] = np.maximum(a[-m:], cim) a[:-m] = np.maximum(a[:-m], np.minimum(m * pvals[:-m], cim)) pvals_corrected = a reject = a <= alphaf elif method.lower() in ['fdr_bh', 'fdr_i', 'fdr_p', 'fdri', 'fdrp']: # delegate, call with sorted pvals reject, pvals_corrected = fdrcorrection(pvals, alpha=alpha, method='indep', is_sorted=True) elif method.lower() in ['fdr_by', 'fdr_n', 'fdr_c', 'fdrn', 'fdrcorr']: # delegate, call with sorted pvals reject, pvals_corrected = fdrcorrection(pvals, alpha=alpha, method='n', is_sorted=True) elif method.lower() in ['fdr_tsbky', 'fdr_2sbky', 'fdr_twostage']: # delegate, call with sorted pvals reject, pvals_corrected = fdrcorrection_twostage(pvals, alpha=alpha, 
method='bky', is_sorted=True)[:2] elif method.lower() in ['fdr_tsbh', 'fdr_2sbh']: # delegate, call with sorted pvals reject, pvals_corrected = fdrcorrection_twostage(pvals, alpha=alpha, method='bh', is_sorted=True)[:2] elif method.lower() in ['fdr_gbs']: #adaptive stepdown in Gavrilov, Benjamini, Sarkar, Annals of Statistics 2009 ## notreject = pvals > alphaf / np.arange(ntests, 0, -1) #alphacSidak ## notrejectmin = np.min(np.nonzero(notreject)) ## notreject[notrejectmin:] = True ## reject = ~notreject ii = np.arange(1, ntests + 1) q = (ntests + 1. - ii)/ii * pvals / (1. - pvals) pvals_corrected_raw = np.maximum.accumulate(q) #up requirementd pvals_corrected = np.minimum.accumulate(pvals_corrected_raw[::-1])[::-1] del pvals_corrected_raw reject = pvals_corrected <= alpha else: raise ValueError('method not recognized') if pvals_corrected is not None: #not necessary anymore pvals_corrected[pvals_corrected>1] = 1 if is_sorted or returnsorted: return reject, pvals_corrected, alphacSidak, alphacBonf else: pvals_corrected_ = np.empty_like(pvals_corrected) pvals_corrected_[sortind] = pvals_corrected del pvals_corrected reject_ = np.empty_like(reject) reject_[sortind] = reject return reject_, pvals_corrected_, alphacSidak, alphacBonf [docs]def fdrcorrection(pvals, alpha=0.05, method='indep', is_sorted=False): ''' pvalue correction for false discovery rate. This covers Benjamini/Hochberg for independent or positively correlated and Benjamini/Yekutieli for general or negatively correlated tests. Parameters ---------- pvals : array_like, 1d Set of p-values of the individual tests. alpha : float, optional Family-wise error rate. Defaults to ``0.05``. method : {'i', 'indep', 'p', 'poscorr', 'n', 'negcorr'}, optional Which method to use for FDR correction. ``{'i', 'indep', 'p', 'poscorr'}`` all refer to ``fdr_bh`` (Benjamini/Hochberg for independent or positively correlated tests). ``{'n', 'negcorr'}`` both refer to ``fdr_by`` (Benjamini/Yekutieli for general or negatively correlated tests). Defaults to ``'indep'``. is_sorted : bool, optional If False (default), the p_values will be sorted, but the corrected pvalues are in the original order. If True, then it assumed that the pvalues are already sorted in ascending order. Returns ------- rejected : ndarray, bool True if a hypothesis is rejected, False if not pvalue-corrected : ndarray pvalues adjusted for multiple hypothesis testing to limit FDR Notes ----- If there is prior information on the fraction of true hypothesis, then alpha should be set to ``alpha * m/m_0`` where m is the number of tests, given by the p-values, and m_0 is an estimate of the true hypothesis. (see Benjamini, Krieger and Yekuteli) The two-step method of Benjamini, Krieger and Yekutiel that estimates the number of false hypotheses will be available (soon). Both methods exposed via this function (Benjamini/Hochberg, Benjamini/Yekutieli) are also available in the function ``multipletests``, as ``method="fdr_bh"`` and ``method="fdr_by"``, respectively. 
See also -------- multipletests ''' pvals = np.asarray(pvals) assert pvals.ndim == 1, "pvals must be 1-dimensional, that is of shape (n,)" if not is_sorted: pvals_sortind = np.argsort(pvals) pvals_sorted = np.take(pvals, pvals_sortind) else: pvals_sorted = pvals # alias if method in ['i', 'indep', 'p', 'poscorr']: ecdffactor = _ecdf(pvals_sorted) elif method in ['n', 'negcorr']: cm = np.sum(1./np.arange(1, len(pvals_sorted)+1)) #corrected this ecdffactor = _ecdf(pvals_sorted) / cm ## elif method in ['n', 'negcorr']: ## cm = np.sum(np.arange(len(pvals))) ## ecdffactor = ecdf(pvals_sorted)/cm else: raise ValueError('only indep and negcorr implemented') reject = pvals_sorted <= ecdffactor*alpha if reject.any(): rejectmax = max(np.nonzero(reject)[0]) reject[:rejectmax] = True pvals_corrected_raw = pvals_sorted / ecdffactor pvals_corrected = np.minimum.accumulate(pvals_corrected_raw[::-1])[::-1] del pvals_corrected_raw pvals_corrected[pvals_corrected>1] = 1 if not is_sorted: pvals_corrected_ = np.empty_like(pvals_corrected) pvals_corrected_[pvals_sortind] = pvals_corrected del pvals_corrected reject_ = np.empty_like(reject) reject_[pvals_sortind] = reject return reject_, pvals_corrected_ else: return reject, pvals_corrected [docs]def fdrcorrection_twostage(pvals, alpha=0.05, method='bky', iter=False, is_sorted=False): '''(iterated) two stage linear step-up procedure with estimation of number of true hypotheses Benjamini, Krieger and Yekuteli, procedure in Definition 6 Parameters ---------- pvals : array_like set of p-values of the individual tests. alpha : float error rate method : {'bky', 'bh') see Notes for details * 'bky' - implements the procedure in Definition 6 of Benjamini, Krieger and Yekuteli 2006 * 'bh' - the two stage method of Benjamini and Hochberg iter : bool Returns ------- rejected : ndarray, bool True if a hypothesis is rejected, False if not pvalue-corrected : ndarray pvalues adjusted for multiple hypotheses testing to limit FDR m0 : int ntest - rej, estimated number of true hypotheses alpha_stages : list of floats A list of alphas that have been used at each stage Notes ----- The returned corrected p-values are specific to the given alpha, they cannot be used for a different alpha. The returned corrected p-values are from the last stage of the fdr_bh linear step-up procedure (fdrcorrection0 with method='indep') corrected for the estimated fraction of true hypotheses. This means that the rejection decision can be obtained with ``pval_corrected <= alpha``, where ``alpha`` is the original significance level. (Note: This has changed from earlier versions (<0.5.0) of statsmodels.) BKY described several other multi-stage methods, which would be easy to implement. However, in their simulation the simple two-stage method (with iter=False) was the most robust to the presence of positive correlation TODO: What should be returned? ''' pvals = np.asarray(pvals) if not is_sorted: pvals_sortind = np.argsort(pvals) pvals = np.take(pvals, pvals_sortind) ntests = len(pvals) if method == 'bky': fact = (1.+alpha) alpha_prime = alpha / fact elif method == 'bh': fact = 1. 
alpha_prime = alpha else: raise ValueError("only 'bky' and 'bh' are available as method") alpha_stages = [alpha_prime] rej, pvalscorr = fdrcorrection(pvals, alpha=alpha_prime, method='indep', is_sorted=True) r1 = rej.sum() if (r1 == 0) or (r1 == ntests): return rej, pvalscorr * fact, ntests - r1, alpha_stages ri_old = r1 while True: ntests0 = 1.0 * ntests - ri_old alpha_star = alpha_prime * ntests / ntests0 alpha_stages.append(alpha_star) #print ntests0, alpha_star rej, pvalscorr = fdrcorrection(pvals, alpha=alpha_star, method='indep', is_sorted=True) ri = rej.sum() if (not iter) or ri == ri_old: break elif ri < ri_old: # prevent cycles and endless loops raise RuntimeError(" oops - should not be here") ri_old = ri # make adjustment to pvalscorr to reflect estimated number of Non-Null cases # decision is then pvalscorr < alpha (or <=) pvalscorr *= ntests0 * 1.0 / ntests if method == 'bky': pvalscorr *= (1. + alpha) if not is_sorted: pvalscorr_ = np.empty_like(pvalscorr) pvalscorr_[pvals_sortind] = pvalscorr del pvalscorr reject = np.empty_like(rej) reject[pvals_sortind] = rej return reject, pvalscorr_, ntests - ri, alpha_stages else: return rej, pvalscorr, ntests - ri, alpha_stages [docs]def local_fdr(zscores, null_proportion=1.0, null_pdf=None, deg=7, nbins=30, alpha=0): """ Calculate local FDR values for a list of Z-scores. Parameters ---------- zscores : array_like A vector of Z-scores null_proportion : float The assumed proportion of true null hypotheses null_pdf : function mapping reals to positive reals The density of null Z-scores; if None, use standard normal deg : int The maximum exponent in the polynomial expansion of the density of non-null Z-scores nbins : int The number of bins for estimating the marginal density of Z-scores. alpha : float Use Poisson ridge regression with parameter alpha to estimate the density of non-null Z-scores. Returns ------- fdr : array_like A vector of FDR values References ---------- B Efron (2008). Microarrays, Empirical Bayes, and the Two-Groups Model. Statistical Science 23:1, 1-22. Examples -------- Basic use (the null Z-scores are taken to be standard normal): >>> from statsmodels.stats.multitest import local_fdr >>> import numpy as np >>> zscores = np.random.randn(30) >>> fdr = local_fdr(zscores) Use a Gaussian null distribution estimated from the data: >>> null = EmpiricalNull(zscores) >>> fdr = local_fdr(zscores, null_pdf=null.pdf) """ from statsmodels.genmod.generalized_linear_model import GLM from statsmodels.genmod.generalized_linear_model import families from statsmodels.regression.linear_model import OLS # Bins for Poisson modeling of the marginal Z-score density minz = min(zscores) maxz = max(zscores) bins = np.linspace(minz, maxz, nbins) # Bin counts zhist = np.histogram(zscores, bins)[0] # Bin centers zbins = (bins[:-1] + bins[1:]) / 2 # The design matrix at bin centers dmat = np.vander(zbins, deg + 1) # Rescale the design matrix sd = dmat.std(0) ii = sd >1e-8 dmat[:, ii] /= sd[ii] start = OLS(np.log(1 + zhist), dmat).fit().params # Poisson regression if alpha > 0: md = GLM(zhist, dmat, family=families.Poisson()).fit_regularized(L1_wt=0, alpha=alpha, start_params=start) else: md = GLM(zhist, dmat, family=families.Poisson()).fit(start_params=start) # The design matrix for all Z-scores dmat_full = np.vander(zscores, deg + 1) dmat_full[:, ii] /= sd[ii] # The height of the estimated marginal density of Z-scores, # evaluated at every observed Z-score. fz = md.predict(dmat_full) / (len(zscores) * (bins[1] - bins[0])) # The null density. 
if null_pdf is None: f0 = np.exp(-0.5 * zscores**2) / np.sqrt(2 * np.pi) else: f0 = null_pdf(zscores) # The local FDR values fdr = null_proportion * f0 / fz fdr = np.clip(fdr, 0, 1) return fdr [docs]class NullDistribution: """ Estimate a Gaussian distribution for the null Z-scores. The observed Z-scores consist of both null and non-null values. The fitted distribution of null Z-scores is Gaussian, but may have non-zero mean and/or non-unit scale. Parameters ---------- zscores : array_like The observed Z-scores. null_lb : float Z-scores between `null_lb` and `null_ub` are all considered to be true null hypotheses. null_ub : float See `null_lb`. estimate_mean : bool If True, estimate the mean of the distribution. If False, the mean is fixed at zero. estimate_scale : bool If True, estimate the scale of the distribution. If False, the scale parameter is fixed at 1. estimate_null_proportion : bool If True, estimate the proportion of true null hypotheses (i.e. the proportion of z-scores with expected value zero). If False, this parameter is fixed at 1. Attributes ---------- mean : float The estimated mean of the empirical null distribution sd : float The estimated standard deviation of the empirical null distribution null_proportion : float The estimated proportion of true null hypotheses among all hypotheses References ---------- B Efron (2008). Microarrays, Empirical Bayes, and the Two-Groups Model. Statistical Science 23:1, 1-22. Notes ----- See also: http://nipy.org/nipy/labs/enn.html#nipy.algorithms.statistics.empirical_pvalue.NormalEmpiricalNull.fdr """ def __init__(self, zscores, null_lb=-1, null_ub=1, estimate_mean=True, estimate_scale=True, estimate_null_proportion=False): # Extract the null z-scores ii = np.flatnonzero((zscores >= null_lb) & (zscores <= null_ub)) if len(ii) == 0: raise RuntimeError("No Z-scores fall between null_lb and null_ub") zscores0 = zscores[ii] # Number of Z-scores, and null Z-scores n_zs, n_zs0 = len(zscores), len(zscores0) # Unpack and transform the parameters to the natural scale, hold # parameters fixed as specified. def xform(params): mean = 0. sd = 1. prob = 1. ii = 0 if estimate_mean: mean = params[ii] ii += 1 if estimate_scale: sd = np.exp(params[ii]) ii += 1 if estimate_null_proportion: prob = 1 / (1 + np.exp(-params[ii])) return mean, sd, prob from scipy.stats.distributions import norm def fun(params): """ Negative log-likelihood of z-scores. The function has three arguments, packed into a vector: mean : location parameter logscale : log of the scale parameter logitprop : logit of the proportion of true nulls The implementation follows section 4 from Efron 2008. """ d, s, p = xform(params) # Mass within the central region central_mass = (norm.cdf((null_ub - d) / s) - norm.cdf((null_lb - d) / s)) # Probability that a Z-score is null and is in the central region cp = p * central_mass # Binomial term rval = n_zs0 * np.log(cp) + (n_zs - n_zs0) * np.log(1 - cp) # Truncated Gaussian term for null Z-scores zv = (zscores0 - d) / s rval += np.sum(-zv**2 / 2) - n_zs0 * np.log(s) rval -= n_zs0 * np.log(central_mass) return -rval # Estimate the parameters from scipy.optimize import minimize # starting values are mean = 0, scale = 1, p0 ~ 1 mz = minimize(fun, np.r_[0., 0, 3], method="Nelder-Mead") mean, sd, prob = xform(mz['x']) self.mean = mean self.sd = sd self.null_proportion = prob # The fitted null density function [docs] def pdf(self, zscores): """ Evaluates the fitted empirical null Z-score density. 
Parameters ---------- zscores : scalar or array_like The point or points at which the density is to be evaluated. Returns ------- The empirical null Z-score density evaluated at the given points. """ zval = (zscores - self.mean) / self.sd return np.exp(-0.5*zval**2 - np.log(self.sd) - 0.5*np.log(2*np.pi))
calculus

I have two questions, please help.

1. Use completing the square to describe the graph of the following function. Support your answers graphically. f(x) = -2x^2 + 4x + 7. Please show work.

2. Can someone help me to graph this function? f(x) = x^2 - 6x + 5. Then find the a. vertex, b. axis of symmetry, c. intercepts, if any.

• calculus -

f(x) = -2(x^2 - 2x) + 7
     = -2(x^2 - 2x + 1) + 7 + 2
     = -2(x - 1)^2 + 9

f(x) = x^2 - 6x + 5
     = x^2 - 6x + 9 + 5 - 9
     = (x - 3)^2 - 4

Now you know all there is to know about the parabola.
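Spelling out what those completed-square forms say about each graph (a short follow-up derived only from the work above):

For f(x) = -2(x - 1)^2 + 9: the parabola opens downward, the vertex is (1, 9), and the axis of symmetry is x = 1.

For f(x) = (x - 3)^2 - 4: the parabola opens upward, the vertex is (3, -4), and the axis of symmetry is x = 3. The y-intercept is f(0) = 5, and the x-intercepts come from (x - 3)^2 = 4, i.e. x = 1 and x = 5, which agrees with the factoring x^2 - 6x + 5 = (x - 1)(x - 5).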
# Objects in ES6

ES2015 has added a shorter syntax for creating object literals. In ES5 you would typically see something like this:

```js
function createUser(first, last){
  let fullName = first + " " + last;

  return {
    first: first,
    last: last,
    fullName: fullName
  }
}

// calling the createUser function
let user = createUser("Abdi", "Cagarweyne");
console.log( user.first );    // Abdi
console.log( user.last );     // Cagarweyne
console.log( user.fullName ); // Abdi Cagarweyne
```

However, as you can see, returning objects with keys and variables with the same name looks repetitive, and in ES2015 you can use the shorthand to initialize objects:

```js
function createUser(first, last){
  let fullName = first + " " + last;

  return { first, last, fullName }
  // this is equivalent to:
  // return { first: first, last: last, fullName: fullName }
}

// calling the createUser function
let user = createUser("Abdi", "Cagarweyne");
console.log( user.first );    // Abdi
console.log( user.last );     // Cagarweyne
console.log( user.fullName ); // Abdi Cagarweyne
```

As you can see this is much cleaner and means that you have less writing to do when initializing objects: if the key and value variables both have the same name, you can use the shorthand method to make your code more terse and less repetitive. The object initializer shorthand works anywhere a new object is created, for example:

```js
let name = "David";
let age = 45;
let colleagues = ["George","Michelle"];

let user = { name, age, colleagues };
// this is the same as:
// let user = { name: name, age: age, colleagues: colleagues };

console.log( user.name );       // David
console.log( user.age );        // 45
console.log( user.colleagues ); // ["George","Michelle"]
```

### Object Destructuring

ES2015 introduces another really cool feature when it comes to assigning values attached to an object or in an array. If we had an array that contained values which we wanted to assign to individual variables, we would do the following in ES5:

```js
let nums = [1, 2, 3];
let a = nums[0], b = nums[1], c = nums[2];
console.log(a, b, c); // prints 1 2 3 to the console
```

To get the values into individual variables we had to access each value by its index and assign it to a variable. However, we can achieve the same result by using destructuring in ES2015:

```js
let nums = [1, 2, 3];
let [a, b, c] = nums;
console.log(a, b, c); // prints 1 2 3 to the console
```

This looks much cleaner than the previous version and means that you have less code to write. The same can also be done with objects, for example:

```js
let obj = { x: 7, y: 8, z: 9 }
let x = obj.x, y = obj.y, z = obj.z;
console.log(x, y, z); // prints 7 8 9 to the console
```

This becomes:

```js
let obj = { x: 7, y: 8, z: 9 }
let { x, y, z } = obj;
console.log(x, y, z); // prints 7 8 9 to the console
```

This destructuring process might seem confusing at first, as you are used to seeing syntax like [ a, b, c ] or { x, y, z } on the right hand side instead of the left. However, what is happening here is that the pattern has been flipped: when you have [ a, b, c ] or { x, y, z } on the left hand side of the assignment, it means take the corresponding values from the right hand side and assign them to the variables on the left. Obviously, for this to work in object destructuring, the variables that you are assigning the values to must match the keys of the object that you are destructuring. For example, the following will not work:

```js
let obj = { a: 7, b: 8, c: 9 }
let { x, y, z } = obj;
console.log(x, y, z); // prints: undefined undefined undefined
```

Also, for array destructuring, the number of variables on the left can be the same as or less than the number of elements in the array; any variables without a corresponding element end up as undefined. For example:

```js
let nums = [1, 2, 3];
let [a] = nums; // only the first element will be assigned to the variable a
console.log(a); // 1
```

This results in undefined for variables b and c:

```js
let nums = [1];
let [a, b, c] = nums; // b and c are assigned undefined since there isn't a corresponding value in the array
console.log(a, b, c); // 1 undefined undefined
```

### Adding functions to an Object

Adding functions to an object is something that is done all of the time, and in ES6 it has been made simpler to add a function to an object. Previously, we would declare the property of the object and then use the full function declaration to add a method:

```js
let myObj = {
  prop1: 'Hello',
  prop2: 'world',
  fullName: function(firstname, lastname) {
    let fullName = firstname + ' ' + lastname;
    return fullName;
  }
}
```

In ES6 the syntax is shorter and simpler: you just declare the object property followed by the parentheses and the function body:

```js
let myObj = {
  prop1: 'Hello',
  prop2: 'world',
  fullName(firstname, lastname) {
    let fullName = firstname + ' ' + lastname;
    return fullName;
  }
}
```

### Template strings

Another great feature added to ES6 is the use of template strings. Template strings allow embedded expressions, and you can use multi-line strings and string interpolation features with them. Let's have a look at some examples to understand better. Say you have a url where you are posting some data and you want to add interpolation with a service id; in ES5 this would be done like this:

```js
let url = "/service/" + servid;
```

In ES6, you can just use backticks to surround your whole string and add interpolation using the dollar sign and curly braces:

```js
let url = `/service/${servid}`;
```

This is much cleaner and means that you can actually create complex strings with interpolation without having to use plus signs, line breaks, multiple double quotes etc. If you have ever needed to create multi-line strings before, you would have had to use line breaks to achieve this. However, in ES6 you simply use the backticks and continue writing your string on the new line without the need for a line break. Let's look at how this is done in ES5 first and then using template strings in ES6:

```js
// ES5
console.log("string text line 1\n" +
"string text line 2");
// "string text line 1
// string text line 2"

// ES6
console.log(`string text line 1
string text line 2`);
// "string text line 1
// string text line 2"
```
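Since template literals accept any expression inside `${ }`, not just variable names, here is a small sketch of that (the price and qty names are made up purely for illustration):

```js
let price = 4.5;
let qty = 3;

// Any JavaScript expression can be interpolated; its result is converted to a string
console.log(`Total: ${price * qty} (${qty} items at ${price} each)`);
// "Total: 13.5 (3 items at 4.5 each)"
```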
### Object.assign

As a developer, writing flexible and reusable functions is something that we all must strive to do, and the new feature in ES6, Object.assign, helps in that regard. Let's look at an example scenario where using Object.assign will be useful for us. Say we're creating a function that accepts an options parameter, which can contain zero or more options as properties, and depending on the options supplied the function will return a different value:

```js
function person(name, options = {}) {
  let age = options.age || 'at least 18';
  let address = options.address || 'Shared accommodation';
  let occupation = options.occupation || 'Student';
  return `${name} is ${age} and is currently at ${address} and their occupation is ${occupation}`;
}
```

When calling this function some options might not be specified, so this means that we need to account for this using default values. As you can see from the function, we assign each property of the options object to a variable and then use the double pipe to check for the presence of a value that has been passed in. If there is no value it will return undefined and then we fall back to the default values. The code above is fine as it is, but it requires a bit of brain power to understand and in the long term may be difficult to maintain. Let's fix this function using a defaults object and the new feature Object.assign:

```js
function person(name, options = {}) {
  let defaults = {
    age: 'at least 18',
    address: 'Shared accommodation',
    occupation: 'Student'
  }
  return `${name} is ${defaults.age} and is currently at ${defaults.address} and their occupation is ${defaults.occupation}`;
}
```

Now that we have improved upon our function, the next step is to merge options and defaults, and where there are duplicates, those from options must override the properties in the defaults object. This is where the Object.assign feature in ES6 comes in really useful to help us. Object.assign copies one or more source objects into a target object and returns the target object. It takes a target object as the first argument and then takes any number of subsequent arguments as the source objects from which to copy the properties. So the function looks like this: Object.assign(target, source_1, ..., source_n). If Object.assign encounters duplicate properties on source objects, the value from the last source object will always be returned. So, if a property foo was already set and the last source object also has a property foo, the value from that last source overwrites the earlier ones and that is the one that gets returned. Let's now use the Object.assign function to merge the defaults object with the passed in options object and see Object.assign in action.

```js
function person(name, options = {}) {
  let defaults = {
    age: 'at least 18',
    address: 'Shared accommodation',
    occupation: 'Student'
  }
  let finalOptions = Object.assign({}, defaults, options);
  return `${name} is ${finalOptions.age} and is currently at ${finalOptions.address} and their occupation is ${finalOptions.occupation}`;
}

person('abdi', {age: 30});
```

The way in which you use Object.assign is very straightforward: you create a variable that will hold the return value, and you pass an empty object as the first argument followed by the source objects that you wish to copy from: let finalOptions = Object.assign({}, defaults, options). One thing to note is that we haven't used the defaults object as the target object, because if we did we would be mutating the defaults object and we would not have access to the original values anymore. It's important that we do not mutate the defaults object, because if we wanted to compare the defaults values with the passed in options values we would have nothing to compare with. Also, notice how there is a variable declared to hold the return value from the Object.assign function; we could have written it like this:

```js
let finalOptions = {};
Object.assign(finalOptions, defaults, options);
```

If we wrote it like this then we would not be using the return value from the function, and even though it would still work, the correct way is to assign the return value directly to the finalOptions variable.
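As a quick aside, here is a minimal sketch of the "last source wins" rule described above (the property names foo, bar and baz are made up for illustration):

```js
let target = { foo: 1, bar: 1 };
let result = Object.assign(target, { foo: 2 }, { foo: 3, baz: 3 });

console.log(result); // { foo: 3, bar: 1, baz: 3 } -- the last source that defines foo wins
// The target itself was mutated, which is why the person() example above
// passes a fresh {} as the target instead of the defaults object.
console.log(target === result); // true
```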
### Arrays

Arrays are an important data type that is used extensively, and it is not uncommon to access elements by their index. For example, say we have an array of fruits:

```js
let fruits = ['apple', 'grapes', 'banana'];
let a = fruits[0];
let b = fruits[1];
let c = fruits[2];
console.log(a, b, c);
```

As you can see from the code above, we can assign each fruit to its own variable using their indices 0, 1, and 2. This is perfectly fine and works, but it is more code than we actually need, and if we had more elements in the array we would need to know the index of each element in order to assign it to a variable, which means that this doesn't scale very well. In ES6, we can assign each fruit to a variable using what's called array destructuring. Array destructuring allows us to write the code in a much better way: similar to the way in which we destructure an object, we can destructure an array. So let's rewrite the example above using array destructuring:

```js
let fruits = ['apple', 'grapes', 'banana'];
let [a, b, c] = fruits;
console.log(a, b, c); // apple grapes banana
```

Instead of accessing elements by their index, we declare a single line of code between square brackets and assign the elements to the variables on the left. The JS engine will try to match the number of variables on the left to the number of elements in the array. As you can see from the code above, we assigned the variables a, b and c the values apple, grapes and banana respectively. This code is actually easier to understand and requires less code. If there are any values that we aren't interested in, we can discard them during the assignment operation:

```js
let fruits = ['apple', 'grapes', 'banana'];
let [a, , b] = fruits;
console.log(a, b); // apple banana
```

In the example above we only store apple and banana into the variables and we have left out grapes. We achieved this using a blank space after the first variable to indicate that we don't want the second element assigned to any variable. When we run the code we only get apple and banana assigned to variables a and b respectively.

### Destructuring and rest parameters

We've already learned some cool things that we can do in ES6, and we can combine array destructuring with rest parameters to group values into other arrays. Let's look at an example to see what we mean:

```js
let fruits = ['apple', 'grapes', 'banana'];
let [first, ...rest] = fruits;
console.log(first, rest); // apple, ['grapes', 'banana']
```

The example above shows array destructuring and the rest parameters in use: we assigned the first element apple to the variable first, and then we used the rest parameter with the three dots ... to group all remaining elements into a new array called rest.

### Destructuring from return values

There will always be opportunities to use array destructuring in your JS code, and another place where we can use them is when we return values from functions. We can use them to assign to multiple variables at once. Let's see what we mean by looking at some code:

```js
function myFruits() {
  let fruits = ['apple', 'grapes', 'banana'];
  return fruits;
}
```

As you would expect in JS, we can assign the return value to a variable:

```js
function myFruits() {
  let fruits = ['apple', 'grapes', 'banana'];
  return fruits;
}

let allFruits = myFruits();
```

Nothing new in the example above; however, using array destructuring we can assign multiple variables at once, just as we did before, from the return value of the function:

```js
function myFruits() {
  let fruits = ['apple', 'grapes', 'banana'];
  return fruits;
}

let [a, b, c] = myFruits();
console.log(a, b, c); // apple, grapes, banana
```

### The for...of loop

The for of loop is a new feature added in ES6, which is a better way of looping over arrays and other iterables. Let's look at an example to understand further. Once again, say we have an array of fruits:

```js
let fruits = ['apple', 'grapes', 'banana'];
```

To loop over the array we can use the for in loop:

```js
for(let i in fruits) {
  console.log(fruits[i]);
}
```

The for in loop returns the index for each element, and it is assigned to the i variable in the loop. We can then use this index variable to access each element of the array. So, there are two steps here: first assigning each index to the i variable and then accessing each element using that index on the array. Using for of we don't need to use the index to access an element in an array:

```js
let fruits = ['apple', 'grapes', 'banana'];

for(let fruit of fruits) {
  console.log(fruit);
}
```

The for of loop reads each element directly from the array and assigns it to the named variable, which is fruit. This is only one step when compared to the for in loop, and this means that we can loop over arrays and other iterables writing less code.

### Objects and the for...of loop

The for of loop cannot be used to iterate over properties of a plain javascript object. So the following will not work:

```js
let person = {
  name: "Abdi",
  address: "123 JS street Node Avenue",
  occupation: "JS Developer"
}

for(let prop of person) {
  console.log("Property", prop);
}
```

If you try to run the code above you will run into a type error: TypeError: person[Symbol.iterator] is not a function. So you might be asking: when can I use for of without running into errors? We can check to see if the for of loop will work by looking to see if there is a function assigned to the Symbol.iterator property. For the array, if we log the type that is assigned to the Symbol.iterator property, we can see that this returns a function:

```js
let fruits = ['apple', 'grapes', 'banana'];
console.log(typeof fruits[Symbol.iterator]); // function

let person = {
  name: "Abdi",
  address: "123 JS street Node Avenue",
  occupation: "JS Developer"
}
console.log(typeof person[Symbol.iterator]); // undefined
```

If we run the same check on the plain JS object, you will notice that it returns undefined. This means that there is nothing assigned to the property and the object will not work with the for of loop.
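If you do want to loop over a plain object's properties with for...of, one common approach is to iterate over an array derived from the object, for example with Object.keys; a minimal sketch:

```js
let person = {
  name: "Abdi",
  address: "123 JS street Node Avenue",
  occupation: "JS Developer"
};

// Object.keys returns an array of the object's own keys, and arrays are iterable
for (let key of Object.keys(person)) {
  console.log(key, '=', person[key]);
}
// name = Abdi
// address = 123 JS street Node Avenue
// occupation = JS Developer
```

Later editions of the language (ES2017) also added Object.entries, which returns [key, value] pairs that destructure nicely inside a for...of loop.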
The array.find() function takes a testing function and returns the first element that meets the given criteria:

let services = [
  {name: 'nails', activated: false},
  {name: 'haircut', activated: true},
  {name: 'feet therapy', activated: true}
]

Let's say that we want to find the first service object that was activated; we can use the array.find function to help us get the first object which has activated set to true:

let services = [
  {name: 'nails', activated: false},
  {name: 'haircut', activated: true},
  {name: 'feet therapy', activated: true}
]
let activated = services.find(service => {
  return service.activated
});
console.log(activated);//{name: 'haircut', activated: true}

The find method will return the first object that has activated set to true.

Maps

ES6 introduces a new data structure called maps. Maps are a data structure composed of a collection of key-value pairs, which makes them very useful for storing simple data. Maps are actually present in other programming languages and are useful for storing property values. A Map stores a collection of unique keys mapped to values; each key is associated with one, and only one, value. In order to find a specific value in a Map, you give it its key and you receive its value in return.

Issues with Objects as maps

JS developers are first exposed to Maps through objects: you can use objects as key-value stores, but there are some limitations with this. The main limitation is that you cannot use a non-string value as a key. The JS engine always converts object keys to strings, and this causes unexpected behavior when you use objects as keys. Let's look at an example to better understand this:

let carOne = { make: 'Audi' };
let carTwo = { make: 'Ford' };

Let's add a new object to the scene, carAge, that will also be an object but will use the two car objects as keys:

let carOne = { make: 'Audi' };
let carTwo = { make: 'Ford' };
let carAge = {};
carAge[carOne] = 3;
carAge[carTwo] = 5;
console.log(carAge); //{ '[object Object]': 5 }

When you look at the console log of the carAge object you will see that it only contains one key, which is [object Object], and a value of 5. Both keys have been converted to strings, and since they were objects, the string that they were converted to was '[object Object]', which means that only that one key is being set in the carAge object. In other words, the last value set in the object overwrites all previous values, and so on.

### The Map data structure

To overcome this limitation in using objects as keys, ES6 introduced Map as a new data structure. The Map object is similar to the JS objects that we are used to: it is a simple key => value data structure. If you want to access the value of a particular key, you just provide that key and in return you get the value. The main difference with Maps is that you can use ANY value as a key or a value and, more importantly, objects are not converted to strings. To see Maps in action, let's make carAge a Map instead of a normal JS object, and use the set method to add keys to the Map. This is different to simply assigning the key in plain JS objects with dot notation or using brackets:

let carOne = { make: 'Audi' };
let carTwo = { make: 'Ford' };
let carAge = new Map();
carAge.set(carOne, 3);
carAge.set(carTwo, 5);
console.log(carAge); // Map { { make: 'Audi' } => 3, { make: 'Ford' } => 5 }

The set method takes two arguments, a key and a value. As we did before, we are using the objects as keys and assigning them their respective ages.
To read the values of a Map we can't simply use the dot or bracket notation; again we need to use one of the methods that the Map comes with, which is the get method, and it takes a key as its only argument. Here's how we read keys off of a Map:

console.log(carAge.get(carOne));// 3
console.log(carAge.get(carTwo));//5

As you can see when you log the keys, the two values are assigned to different keys in the Map and nothing is converted to a string or overwritten. Therefore, in the majority of cases we should not use JS objects as maps, because of their limitations when it comes to using objects as keys. You should use Maps when the keys are unknown until runtime, for example after loading in data from an AJAX call. However, when we are using predefined keys and we know their values before runtime, it is perfectly fine to use normal JS objects. We should also aim to use Maps when all the keys and all the values are of the same type. This will help keep the maps consistent and easier to work with, as you know what to expect.

Iterating Maps with for...of

The Map data structure is iterable, which means that we can use the for...of loop, and each run of the loop will return a [key, value] pair for each entry in the Map. Let's create a new Map of cars and add some entries:

let cars = new Map();
cars.set("CarOne", "Audi");
cars.set("CarTwo", "Ford");
cars.set("CarThree", "GM");
cars.set("CarFour", "BMW");

We can easily loop through this Map of cars using the for...of loop, and on each iteration it will return a key-value pair:

let cars = new Map();
cars.set("CarOne", "Audi");
cars.set("CarTwo", "Ford");
cars.set("CarThree", "GM");
cars.set("CarFour", "BMW");

for(let [key, value] of cars) {
  console.log(`${key} = ${value}`);
}

As you can see from the for of loop, we have used array destructuring to assign the key to key and the value to value respectively, and we are accessing these using template strings when we log them out. When we run the code we can see that it prints each entry of the Map to the console successfully.

WeakMaps

ES6 has also introduced another data structure that is a variation of the Map, called the WeakMap. The WeakMap is a special type of Map, and the main difference is that you can only use objects as keys. This means that you can't use primitive data types such as strings, numbers and booleans as the keys in a WeakMap. Let's look at an example where WeakMaps are used:

let personOne = {};
let personTwo = {};
let people = new WeakMap();
people.set(personOne, "Abdi");
people.set(personTwo, "David");
console.log(people.get(personOne));//Abdi
console.log(people.get(personTwo));//David

As you can see from the code above, we can use the same set and get methods as we did with Map. However, if you try using a string as a key, you will run into an error which says Invalid value used as weak map key. Besides only allowing objects as keys, WeakMaps are not iterable: you cannot use the for...of loop to iterate over the keys in a WeakMap. You will run into the same error as when trying to iterate over objects with a for...of loop.

Why do we need WeakMaps?

The main use for WeakMaps is that they make efficient use of memory; this means that individual entries can be garbage collected while the WeakMap is still in use. They are called 'Weak' because they hold a weak reference to the objects that are used as keys.
As long as an object is no longer referenced after it is used, WeakMaps will not prevent the garbage collector from collecting objects that are being used as keys in a WeakMap. This makes efficient use of memory and frees more of it up to be used elsewhere.

Sets

Like Maps and WeakMaps, Sets are a new data structure introduced in ES6. To understand why we need Sets in the first place, let's first go back to JS Arrays and see some of the limitations that led to Sets being added to ES6.

Limitations with Array

As you know, Arrays in JS are simple and easy to use; however, one thing that they don't do is enforce uniqueness in the elements that they hold. This means that you can have duplicate entries in an array in JS. So the following array in JS is perfectly fine:

let cars = ['Audi', 'Ford', 'Audi', 'Mercedes', 'VW'];
console.log(cars.length)//5

If we print the length property of the array we will see that it has a size of 5 items, even though we have a duplicate item, Audi. So in ES6, if you want to prevent duplicate entries in an array you can use Sets. Sets can store unique values of any type, be it primitive values or object references. You can create Sets in the same way that you create Maps, using the new keyword:

let cars = new Set();

Now if you want to add items to a set you use the add method that is available on all instances of a set, instead of the array push method:

let cars = new Set();
cars.add('Audi');
cars.add('Ford');
cars.add('Mercedes');
cars.add({driver: 'Abdi'});
cars.add('VW');
cars.add('Audi');
console.log('Total no. cars', cars.size);//5

To get the number of items in a set you use the .size property instead of .length. You will notice that the duplicate entry of Audi is ignored and the total size is 5, not 6.

Sets and for...of

As you would expect, Set objects are iterable and we can use the for...of loop and destructuring. Let's see an example of iterating over a set object:

let cars = new Set();
cars.add('Audi');
cars.add('Ford');
cars.add('Mercedes');
cars.add({driver: 'Abdi'});
cars.add('VW');
cars.add('Audi');

for(let car of cars) {
  console.log(car);
}

Sets and destructuring

We can also use destructuring with sets just like we can with normal JS arrays:

let cars = new Set();
cars.add('Audi');
cars.add('Ford');
cars.add('Mercedes');
cars.add({driver: 'Abdi'});
cars.add('VW');
cars.add('Audi');

let [a, b, c] = cars;
console.log(a, b, c);//Audi, Ford, Mercedes

WeakSets

Similar to WeakMaps we have WeakSets, and if you recall, these are the memory-efficient version of Sets. Let's look at an example to see how WeakSets work:

let weakCars = new WeakSet();
weakCars.add('Audi'); //error: Invalid value used in weak set

If you try to add a string to a WeakSet you will get an error: Invalid value used in weak set. Just like WeakMaps, WeakSets only accept objects and nothing else can be stored. So let's add an object instead:

let weakCars = new WeakSet();
weakCars.add({driver: 'Abdi'});
let passenger = { name: 'Sarah' };
weakCars.add(passenger);

To see if a particular object is in a WeakSet you can use the has() method, which returns a boolean indicating whether the WeakSet contains the given object.
let weakCars = new WeakSet();
weakCars.add({driver: 'Abdi'});
let passenger = { name: 'Sarah' };
weakCars.add(passenger);
console.log(weakCars.has(passenger))// true

If you want to delete a particular entry in a WeakSet you can use the delete method:

let weakCars = new WeakSet();
weakCars.add({driver: 'Abdi'});
let passenger = { name: 'Sarah' };
weakCars.add(passenger);
weakCars.delete(passenger);
console.log(weakCars.has(passenger))// false

WeakSets are different from Sets in a few ways: first, they are not iterable, and they offer no methods for reading values from them.

When should we use WeakSets

There are a limited number of use cases where WeakSets are actually useful, even though we can't iterate over them or even read values from them. One obvious example is efficient memory usage and preventing memory leaks. Another instance where WeakSets can be used is when you want to make sure that you do not mutate any data in your app. For example, say you have a function that is called whenever a particular link is clicked, and when it is called the function will set a property in an object to true:

let carSlides = [
  { car: 'Audi', seen: false, image: 'url' },
  { car: 'Ford', seen: false, image: 'url' },
  { car: 'Mercedes', seen: false, image: 'url' },
  { car: 'VW', seen: false, image: 'url' }
];

function clicked(carSlides) {
  carSlides.forEach(car => {
    //mutates each object in the carSlides array
    car.seen = true;
  })
}

//let's say this is set to true when the user clicks on a link somewhere
let linkClicked = true;

if(linkClicked) {
  clicked(carSlides);
}

console.log(carSlides)

The above is fine, but let's say that you do not want to mutate your data; having immutable objects is something that you should try to implement in your code where possible. We can refactor the code above to make use of WeakSets and not make any mutations to the carSlides array:

let carSlides = [
  { car: 'Audi', seen: false, image: 'url' },
  { car: 'Ford', seen: false, image: 'url' },
  { car: 'Mercedes', seen: false, image: 'url' },
  { car: 'VW', seen: false, image: 'url' }
];

let carsViewed = new WeakSet();

function clicked(carSlides) {
  carSlides.forEach(car => {
    //instead of mutating we add the object to the carsViewed WeakSet
    carsViewed.add(car);
  })
}

//let's say this is set to true when the user clicks on a link somewhere
let linkClicked = true;

if(linkClicked) {
  clicked(carSlides);
}

//console.log(carSlides)//still have our objects intact without mutations
//we can then check to see that we have each car as an object in the WeakSet
for(let car of carSlides) {
  //check each individual object is present in the WeakSet
  console.log(carsViewed.has(car)); //true
  //other code.......
}

Even though it seems that we are doing extra work, this is actually making sure that we do not mutate our carSlides array data, and in essence we achieve some form of data immutability. What you also have to remember is that the WeakSet does not prevent the garbage collector from collecting objects that are no longer being referenced, in turn making efficient use of memory.
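One small footnote to the Set discussion above (this example is mine, not part of the original text): because a Set only keeps unique values, passing an existing array to the Set constructor and spreading it back out is a common way to strip duplicates from an array.

let cars = ['Audi', 'Ford', 'Audi', 'Mercedes', 'VW'];
// Build a Set from the array (the duplicate 'Audi' is dropped),
// then spread it back into a plain array.
let uniqueCars = [...new Set(cars)];
console.log(uniqueCars); //['Audi', 'Ford', 'Mercedes', 'VW']

This works because Sets are iterable, so the spread operator can expand the Set's unique entries back into an array.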
Calculus on a Parabola 1 A general equation for a parabola is \(f(x)=Ax^2+Bx+C\). 1. Find \(f'(x)\) and \(f'(0)\) 2. Evaluate \(\displaystyle \int_{-h}^h f(x) \, dx\) 2 Assume you have a parabola with points at \((-h, y_0)\), \((0,y_1)\), and \((h,y_2)\). 1. Find values for \(A\), \(B\), and \(C\) in terms of \(h, y_0, y_1\) and \(y_2\) 2. Find \(f'(x)\) and \(f'(0)\) in terms of \(h, y_0, y_1\) and \(y_2\) 3. Evaluate \(\displaystyle \int_{-h}^h f(x) \, dx\) in terms of \(h, y_0, y_1\) and \(y_2\)
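A quick worked sketch of problem 1, added as a check (not part of the original exercise set): \(f'(x) = 2Ax + B\), so \(f'(0) = B\), and \(\displaystyle \int_{-h}^h f(x) \, dx = \left[\tfrac{A}{3}x^3 + \tfrac{B}{2}x^2 + Cx\right]_{-h}^{h} = \tfrac{2Ah^3}{3} + 2Ch\); the \(Bx\) term is odd, so it contributes nothing over the symmetric interval. These expressions feed into problem 2, where matching the parabola to the three given points determines \(A\), \(B\), and \(C\).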
## Please edit system and help pages ONLY in the master wiki! ## For more information, please see MoinMoin:MoinDev/Translation. ## page was renamed from WikiCourse/01 What is a MoinMoin-wiki? ##master-page: ##master-date: #acl -All:write Default #format wiki #language en <> = What is a wiki? = A wiki (also called !WikiWiki or !WikiWikiWeb) is a collection of websites, which not only can be read, but can also be edited by the users directly and simply. ''wikiwiki'' is Hawaiian and means "fast". The first !WikiWikiWeb was developed and put into operation by Ward Cunningham in 1995. The idea of editable content in the World Wide Web dates from the original ideas of the inventor of the World Wide Web, Tim Berners-Lee. == The wiki-way == * open and cooperative: on many sites, everyone may change everything. * simple and fast: you can enter and save any content, which is available at once. Content is more important than design. * safe: MoinMoin remembers all old page versions. * cross-linked: the information in the wiki is highly linked. * accessible: you only need a browser and a network connection to access the wiki. * flexible: in a wiki you can save many kinds of information, e. g. training courses, transparency lectures and brainstorming. = What is MoinMoin? = MoinMoin is software to run a wiki. It is available under the GPL and implemented in the programming language Python. Contributions can be made by also using the GPL and Python.
Readme.md

Status

In the process of migration to https://github.com/mozilla/gecko-dev/; once completed, this repo will get archived. Please submit new bugs in bugzilla and make them blockers for the libdweb metabug.

libdweb

Extension containing experimental libdweb APIs. This repository hosts a community effort to implement experimental APIs for Firefox WebExtensions with a goal of enabling dweb protocols in Firefox through browser add-ons. The long term goal of this project is to integrate these APIs into the WebExtensions ecosystem.

Participation

You can help this effort in the following ways:
1. Use these APIs to make something illustrating their value, to build the case for adoption in the core WebExtension API set.
2. Get involved in driving this effort: Help with an API implementation, maintenance, testing, code samples, etc.
3. Help build API adapters to enable seamless integration with existing libraries.
4. Join our IRC channel: #dweb on irc.mozilla.org

Status: In active development

API Status
Protocol Handler 🐥
Service Discovery 🐣
File System 🐣
UDP Socket 🐣
TCP Socket 🐣

• 🥚 : In design phase
• 🐣 : Work in progress
• 🐥 : Try it out
• 🐓 : Usable

API overview

Note: You can try all the examples after you've cloned the repo and set up the toolchain by running npm install. You will also need Firefox Nightly to run the demos.

Protocol API

The Protocol API allows you to handle custom protocols from your Firefox extension. This is different from the existing WebExtensions protocol handler API in that it does not register a website for handling corresponding URLs but rather allows your WebExtension to implement the handler. The following example implements a simple dweb:// protocol. When Firefox is navigated to dweb://hello/world, for example, it will invoke your registered handler and pass it a request object containing the request URL as the request.url string property. Your handler is expected to return a response whose content is an async iterator of ArrayBuffers. In our example we use a respond async generator function to respond with some HTML markup.

browser.protocol.registerProtocol("dweb", request => {
  return {
    contentType: "text/html",
    content: respond(request.url)
  }
})

async function* respond(text) {
  const encoder = new TextEncoder("utf-8")
  yield encoder.encode("<h1>Hi there!</h1>\n").buffer
  yield encoder.encode(
    `<p>You've successfully loaded <strong>${text}</strong><p>`
  ).buffer
}

Given that response.content is an async iterator it is also possible to stream response content as this next example illustrates.
browser.protocol.registerProtocol("dweb", request => { switch (request.url) { case "dweb://stream/": { return { contentType: "text/html", content: streamRespond(request) } } default: { return { contentType: "text/html", content: respond(request.url) } } } }) async function* streamRespond(request) { const encoder = new TextEncoder("utf-8") yield encoder.encode("<h1>Say Hi to endless stream!</h1>\n").buffer let n = 0 while (true) { await new Promise(resolve => setTimeout(resolve, 1000)) yield encoder.encode(`<p>Chunk #${++n}<p>`).buffer } } You can see the demo of the example above in Firefox Nightly by running following command, and then navigating to dweb://hello/world or dweb://stream/ npm run demo:protocol protocol demo Service Discovery API API provides DNS-Based Service Discovery API as per rfc6763. Following example illustrates how this API can be used to discover available http services in the network. void (async () => { const services = browser.ServiceDiscovery.discover({ type: "dweb", protocol: "tcp" // Must be "tcp" or "udp" }) console.log("Start discovery", services.query) for await (const service of services) { if (service.lost) { console.log("Lost service", service) } else { console.log("Found service", { name: service.name, type: service.type, protocol: service.protocol }) for (const { address, port, host, attributes } of await service.addresses()) { console.log( `Service ${service.name} available at ${host} ${address}:${port}`, attributes ) } } } console.log("End discovery", services.query) })() API also allows you to announce service that others on the network can discover. Following example illustrates that: void (async () => { const service = await browser.ServiceDiscovery.announce({ name: "My dweb service", type: "dweb", protocol: "tcp", // must be "tcp" or "udp" port: 3000, // ommting port will just assign you available one. attributes: { // optional txt records version: "1.0." } }) console.log("Service annouced", { name: service.name, // Note: Colud be different like "My dweb service (2)" type: service.type, protocol: service.protocol, port: service.port, attributes: service.attributes // Will be null if was omitted }) // Wait for a 1 minute and expire service announcement await new Promise(timeout => setTimeout(timeout, 60 * 1000)) await service.expire() console.log(`Service expired`) })() Demo You can try demo WebExtension that discovers and displays http services in your local network when button in the toolbar is clicked. You can run it in Firefox Nightly via the following command npm run demo:discovery discovery button FileSystem API FileSystem API provides access to an OS file system, but restricted to a user chosen directory. Below example illustrates writing a content to a file in user chosen directory. void (async () => { const volume = await browser.FileSystem.mount({ read: true, write: true }) console.log("Mounted", volume) localStorage.setItem("volumeURL", volume.url) const fileURL = new URL("hello.md", volume.url).href const encoder = new TextEncoder() const content = encoder.encode("# Hello World\n").buffer const size = await browser.FileSystem.writeFile(fileURL, content) console.log(`Wrote ${size} bytes to ${fileURL}`) })() Call to FileSystem.mount will notify user that corresponding WebExtension is requesting read / write access to the file system, which user can deny or grant by choosing a directory. 
If the user denies access, the promise returned by mount will be rejected; if the user chooses to grant access to a specific directory, the promise will resolve to an object like:

{
  url: "file:///Users/user/dweb/",
  readable: true,
  writable: true
}

The rest of the example, which writes content into a file, should be pretty straightforward.

Note: Granted access will be preserved across sessions, and the WebExtension can mount the same directory without prompting the user again. The following is a more complete example that will either mount a directory that the user has already granted access to, or request access to a new directory otherwise.

void (async () => {
  const url = localStorage.getItem("volumeURL")
  const volume = await browser.FileSystem.mount({ url, read: true })
  const fileURL = new URL("hello.md", volume.url).href
  const file = await browser.FileSystem.open(fileURL, { read: true })
  const chunk = await browser.File.read(file, { position: 2, size: 5 })
  console.log(`Read file fragment from ${fileURL}`, chunk)
  const decoder = new TextDecoder()
  const content = decoder.decode(chunk)
  console.log(`Decode read fragment`, content)
  await browser.File.close(file)
})()

Note: Attempting to mount a URL that the user has not previously granted access to will fail without even prompting the user. The FileSystem API has many other functions available. You can follow the links for detailed API interface definitions of browser.FileSystem and browser.File.

You can try a demo WebExtension that provides a REPL in the sidebar exposing all of the FileSystem API, which you can run in Firefox Nightly via the following command.

Note: Commands recognized by the REPL correspond to the API function names, and all the parameters are names prefixed by -- and followed by a value.

npm run demo:fs

FileSystem

UDPSocket API

The API provides an implementation of UDP Datagram sockets. Follow the link for the detailed API interface for browser.UDPSocket, which corresponds to UDPSocketManager. There is also a @libdweb/dgram-adapter project that provides a nodejs dgram API adapter.

Example

The following example opens a UDP socket on port 41234 that will act as a server and will continuously print incoming messages.
You can run in Firefox Nightly via following command Note: This is a demo illustrates UDP and Multicasting API. npm run demo:p2p-chat p2p-chat REPL Demo You can try demo WebExtension that provides a REPL in the sidebar exposing all of the UDPSocket API, which you can run in Firefox Nightly via following command Note: Commands recognized by REPL correspond to the API functions names and all the parameters are names prefixed by -- and followed by corresponding values. npm run demo:dgram TCPSocket API TCPSocket API provides a client and server socket APIs for TCP networking. Example Following example starts echo TCP server on port 8090. It will accept incoming connections read first chunk of data, respond by echoing messages back to. void (async () => { const encoder = new TextEncoder() const decoder = new TextDecoder() const server = await browser.TCPSocket.listen({ port: 8090 }) console.log("Started TCP Server", server) const onconnect = async client => { console.log("Client connected:", client) const message = await client.read() console.log("Received message from client:", decoder.decode(message)) const response = encoder.encode(`<echo>${decoder.decode(message)}</echo>`) await client.write(response.buffer) } for await (const client of server.connections) { onconnect(client) } })() Note: server.connections are represented via async iterator which can be consumed via for await, but be aware that connections are not buffered which is why handle each connection in onconnect function so our server can accept more connections. If you use await inside the for await block chances are you will miss connection, in which case it will be automatically closed. Following example connects to server from the example above writes a message to it and then reads message received back. void (async () => { const encoder = new TextEncoder() const decoder = new TextDecoder() const client = await browser.TCPSocket.connect({ host: "localhost", port: 8090 }) await client.opened console.log("Client connected:", client) await client.write(encoder.encode("Hello TCP").buffer) const response = await client.read() console.log("Received response:", decoder.decode(response)) })() You can’t perform that action at this time.
__label__pos
0.956505
Типизация редакса Posted on February 11, 2020 На одном из собесов дали задачку починить простенькое react-приложение с самописным flux-велосипедом. Вот кусок кода (комментарии мои): На что пришло письмо с комментариями: Ну тут понятно с последним комментарием никак нельзя согласиться. Сборка редьюсеров это никак не код библиотеки общего назначения, это код ИСПОЛЬЗОВАНИЯ библиотеки общего назначения, но это ладно, я готов с этим жить. Но вот проверки в рантайме… Не, я конечно промолчал и на следующий этап кастинга меня позвали, но все таки. В интернете опять кто то неправ… В том смысле что у меня у самого есть такой же код, я не готов спорить на эту тему. Но вот по поводу того что в тайпскрипте нельзя построить систему типов которая будет пресекать и бдить - я не согласен. Вот код (рекомендую открыть его здесь ) type Handler = (state: State, action: Action) => State; interface Handlers { setRole: Handler; someAction: Handler; } interface State { } const handlers: Handlers = { setRole : (state:State, action:Action) => Object.assign({}, state, { role: action.value }) ,someAction : (state:State, action:Action) => Object.assign({}, state) } const reducer = (state:State, action:Action) => handlers[action.type](state, action) interface Action { type: keyof Handlers value: any } reducer({}, { type: "setRole", value: "admin" }); reducer({}, { type: "badAction", value: "admin" }); Видно что компилятор сразу бросает ошибку на “badAction”: Как это работает? Если хотим добавить новый редьюсер - добавляем сначала строку вида имя_редьюсера: Handler в интерфейс Handlers и после этого послушно следуем ругани компилятора. Он заругается на то что Property 'имя_редьюсера' is missing in type ... и не успокоится пока мы не добавим новый редьюсер: Если мы попытаемся дернуть несуществующий экшин то ругаться компилятор будет вот так: Если мы удалим запись из интерфейса Handlers но не удалим хандлер из const handlers - получим несоответствие типов. То же самое если сделаем наоборот - удалим из const handlers но не удалим из интерфейса. Единственный сценарий который не отслеживается - это когда все у нас хорошо и красиво прописано но нигде в коде не используется. Да, это не так удобно как динамичное создание редьюсеров или их merge, не спорю, но если экшинов до 50 - жить можно вполне. Сами хандлеры не обязательно прописывать в одном файле, можно ипортировать. В общем на определенных масштабах вполне себе можно жить и типобезопасно и относительно удобно. А идет все это зло с примеров от производителя: Мораль - читая доки иногда стоит и мозг включать.
__label__pos
0.505281
own_name_on_connection Description: [ CCode ( cname = "g_bus_own_name_on_connection_with_closures" ) ] [ Version ( since = "2.26" ) ] public uint own_name_on_connection (DBusConnection connection, string name, BusNameOwnerFlags flags, owned BusNameAcquiredCallback? name_acquired_closure = null, owned BusNameLostCallback? name_lost_closure = null) Version of g_bus_own_name_on_connection using closures instead of callbacks for easier binding in other languages. Parameters: connection a DBusConnection name the well-known name to own flags a set of flags from the BusNameOwnerFlags enumeration name_acquired_closure Closure to invoke when name is acquired or null name_lost_closure Closure to invoke when name is lost or null Returns: an identifier (never 0) that can be used with unown_name to stop owning the name. Namespace: GLib.Bus Package: gio-2.0
__label__pos
0.561213
You can configure OKD to use VMware vSphere VMDKs as to back PersistentVolumes. This configuration can include using VMware vSphere VMDKs as persistent storage for application data. The vSphere Cloud Provider allows using vSphere-managed storage in OKD and supports every storage primitive that Kubernetes uses: • PersistentVolume (PV) • PersistentVolumesClaim (PVC) • StorageClass PersistentVolumes requested by stateful containerized applications can be provisioned on VMware vSAN, VVOL, VMFS, or NFS datastores. Kubernetes PVs are defined in Pod specifications. They can reference VMDK files directly if you use Static Provisioning or PVCs when you use Dynamic Provisioning, which is preferred. The latest updates to the vSphere Cloud Provider are in vSphere Storage for Kubernetes. Before you begin Requirements VMware vSphere Standalone ESXi is not supported. • vSphere version 6.0.x minimum recommended version 6.7 U1b is required if you intend to support a complete VMware Validate Design. • vSAN, VMFS and NFS supported. • vSAN support is limited to one cluster in one vCenter. Prerequisites You must install the VMware Tools on each Node VM. See Installing VMware tools for more information. You can use the open source VMware govmomi CLI tool for additional configuration and troubleshooting. For example, see the following govc CLI configuration: export GOVC_URL='vCenter IP OR FQDN' export GOVC_USERNAME='vCenter User' export GOVC_PASSWORD='vCenter Password' export GOVC_INSECURE=1 Permissions Create and assign roles to the vSphere Cloud Provider. A vCenter user with the required set of privileges is required. In general, the vSphere user designated to the vSphere Cloud Provider must have the following permissions: • Read permission on the parent entities of the node VMs such as folder, host, datacenter, datastore folder, datastore cluster, and so on. • VirtualMachine.Inventory.Create/Delete permission on the vsphere.conf defined resource pool - this is used to create and delete test VMs. See the vSphere Documentation Center for steps to create a custom role, user, and role assignment. vSphere Cloud Provider supports OKD clusters that span multiple vCenters. Make sure that all above privileges are correctly set for all vCenters. Dynamic provisioning permissions Dynamic persistent volume creation is the recommended practice. Roles Privileges Entities Propagate to children manage-k8s-node-vms Resource.AssignVMToPool, VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.RemoveDisk, VirtualMachine.Inventory.Create, VirtualMachine.Inventory.Delete, VirtualMachine.Config.Settings Cluster, Hosts, VM Folder Yes manage-k8s-volumes Datastore.AllocateSpace, Datastore.FileManagement (Low level file operations) Datastore No k8s-system-read-and-spbm-profile-view StorageProfile.View (Profile-driven storage view) vCenter No Read-only (pre-existing default role) System.Anonymous, System.Read, System.View Datacenter, Datastore Cluster, Datastore Storage Folder No Static provisioning permissions Datastore.FileManagement is required for only the manage-k8s-volumes role, if you create PVCs to bind with statically provisioned PVs and set the reclaim policy to delete. When the PVC is deleted, associated statically provisioned PVs are also deleted. 
Roles Privileges Entities Propergate to Children manage-k8s-node-vms VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.RemoveDisk VM Folder Yes manage-k8s-volumes Datastore.FileManagement (Low level file operations) Datastore No Read-only (pre-existing default role) System.Anonymous, System.Read, System.View vCenter, Datacenter, Datastore Cluster, Datastore Storage Folder, Cluster, Hosts No …​ Procedure 1. Create a VM folder and move OKD Node VMs to this folder. 2. Set the disk.EnableUUID parameter to true for each Node VM. This setting ensures that the VMware vSphere’s Virtual Machine Disk (VMDK) always presents a consistent UUID to the VM, allowing the disk to be mounted properly. Every VM node that will be participating in the cluster must have the disk.EnableUUID parameter set to true. To set this value, follow the steps for either the vSphere console or govc CLI tool: 1. From the vSphere HTML Client navigate to VM propertiesVM OptionsAdvancedConfiguration Parametersdisk.enableUUID=TRUE 2. Or using the govc CLI, find the Node VM paths: $govc ls /datacenter/vm/<vm-folder-name> 1. Set disk.EnableUUID to true for all VMs: $govc vm.change -e="disk.enableUUID=1" -vm='VM Path' If OKD node VMs are created from a virtual machine template, then you can set disk.EnableUUID=1 on the template VM. VMs cloned from this template inherit this property. Configuring OKD for vSphere You can configure OKD for vSphere in two ways: Option 1: Configuring OKD for vSphere using Ansible You can configure OKD for VMware vSphere (VCP) by modifying the Ansible inventory file. These changes can be made before installation, or to an existing cluster. Procedure 1. Add the following to the Ansible inventory file: [OSEv3:vars] openshift_cloudprovider_kind=vsphere openshift_cloudprovider_vsphere_username=administrator@vsphere.local (1) openshift_cloudprovider_vsphere_password=<password> openshift_cloudprovider_vsphere_host=10.x.y.32 (2) openshift_cloudprovider_vsphere_datacenter=<Datacenter> (3) openshift_cloudprovider_vsphere_datastore=<Datastore> (4) 1 The user name with the appropriate permissions to create and attach disks in vSphere. 2 The vCenter server address. 3 The vCenter Datacenter name where the OKD VMs are located. 4 The datastore used for creating VMDKs. 2. Run the deploy_cluster.yml playbook. $ ansible-playbook -i <inventory_file> \ playbooks/deploy_cluster.yml Installing with Ansible also creates and configures the following files to fit your vSphere environment: • /etc/origin/cloudprovider/vsphere.conf • /etc/origin/master/master-config.yaml • /etc/origin/node/node-config.yaml As a reference, a full inventory is shown as follows: The openshift_cloudprovider_vsphere_ values are required for OKD to be able to create vSphere resources such as VMDKs on datastores for persistent volumes. 
$ cat /etc/ansible/hosts [OSEv3:children] ansible masters infras apps etcd nodes lb [OSEv3:vars] become=yes ansible_become=yes ansible_user=root oreg_auth_user=service_account (1) oreg_auth_password=service_account_token (1) openshift_deployment_type=openshift-enterprise # Required per https://access.redhat.com/solutions/3480921 oreg_url=registry.access.redhat.com/openshift3/ose-${component}:${version} openshift_examples_modify_imagestreams=true # vSphere Cloud provider openshift_cloudprovider_kind=vsphere openshift_cloudprovider_vsphere_username="[email protected]" openshift_cloudprovider_vsphere_password="password" openshift_cloudprovider_vsphere_host="vcsa65-dc1.example.com" openshift_cloudprovider_vsphere_datacenter=Datacenter openshift_cloudprovider_vsphere_cluster=Cluster openshift_cloudprovider_vsphere_resource_pool=ResourcePool openshift_cloudprovider_vsphere_datastore="datastore" openshift_cloudprovider_vsphere_folder="folder" # Service catalog openshift_hosted_etcd_storage_kind=dynamic openshift_hosted_etcd_storage_volume_name=etcd-vol openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"] openshift_hosted_etcd_storage_volume_size=1G openshift_hosted_etcd_storage_labels={'storage': 'etcd'} openshift_master_ldap_ca_file=/home/cloud-user/mycert.crt openshift_master_identity_providers=[{'name': 'idm', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=admin,cn=users,cn=accounts,dc=example,dc=com', 'bindPassword': 'ldapadmin', 'ca': '/etc/origin/master/ca.crt', 'insecure': 'false', 'url': 'ldap://ldap.example.com/cn=users,cn=accounts,dc=example,dc=com?uid?sub?(memberOf=cn=ose-user,cn=groups,cn=accounts,dc=openshift,dc=com)'}] # Setup vsphere registry storage openshift_hosted_registry_storage_kind=vsphere openshift_hosted_registry_storage_access_modes=['ReadWriteOnce'] openshift_hosted_registry_storage_annotations=['volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume'] openshift_hosted_registry_replicas=1 openshift_hosted_router_replicas=3 openshift_master_cluster_method=native openshift_node_local_quota_per_fsgroup=512Mi default_subdomain=example.com openshift_master_cluster_hostname=openshift.example.com openshift_master_cluster_public_hostname=openshift.example.com openshift_master_default_subdomain=apps.example.com os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy' osm_use_cockpit=true # Red Hat subscription name and password rhsub_user=username rhsub_pass=password rhsub_pool=8a85f9815e9b371b015e9b501d081d4b # metrics openshift_metrics_install_metrics=true openshift_metrics_storage_kind=dynamic openshift_metrics_storage_volume_size=25Gi # logging openshift_logging_install_logging=true openshift_logging_es_pvc_dynamic=true openshift_logging_es_pvc_size=30Gi openshift_logging_elasticsearch_storage_type=pvc openshift_logging_es_cluster_size=1 openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"} openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"} openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"} openshift_logging_fluentd_nodeselector={"node-role.kubernetes.io/infra": "true"} openshift_logging_storage_kind=dynamic #registry openshift_public_hostname=openshift.example.com [ansible] localhost [masters] master-0.example.com vm_name=master-0 ipv4addr=10.x.y.103 master-1.example.com vm_name=master-1 ipv4addr=10.x.y.104 
master-2.example.com vm_name=master-2 ipv4addr=10.x.y.105 [infras] infra-0.example.com vm_name=infra-0 ipv4addr=10.x.y.100 infra-1.example.com vm_name=infra-1 ipv4addr=10.x.y.101 infra-2.example.com vm_name=infra-2 ipv4addr=10.x.y.102 [apps] app-0.example.com vm_name=app-0 ipv4addr=10.x.y.106 app-1.example.com vm_name=app-1 ipv4addr=10.x.y.107 app-2.example.com vm_name=app-2 ipv4addr=10.x.y.108 [etcd] master-0.example.com master-1.example.com master-2.example.com [lb] haproxy-0.example.com vm_name=haproxy-0 ipv4addr=10.x.y.200 [nodes] master-0.example.com openshift_node_group_name="node-config-master" openshift_schedulable=true master-1.example.com openshift_node_group_name="node-config-master" openshift_schedulable=true master-2.example.com openshift_node_group_name="node-config-master" openshift_schedulable=true infra-0.example.com openshift_node_group_name="node-config-infra" infra-1.example.com openshift_node_group_name="node-config-infra" infra-2.example.com openshift_node_group_name="node-config-infra" app-0.example.com openshift_node_group_name="node-config-compute" app-1.example.com openshift_node_group_name="node-config-compute" app-2.example.com openshift_node_group_name="node-config-compute" 1 If you use a container registry that requires authentication, such as the default container image registry, specify the credentials for that account. See Accessing and Configuring the Red Hat Registry. Deploying a vSphere VM environment is not officially supported by Red Hat, but it can be configured. Option 2: Manually configuring OKD for vSphere Manually configuring master hosts for vSphere Perform the following on all master hosts. Procedure 1. Edit the master configuration file at /etc/origin/master/master-config.yaml by default on all masters and update the contents of the apiServerArguments and controllerArguments sections: kubernetesMasterConfig: ... apiServerArguments: cloud-provider: - "vsphere" cloud-config: - "/etc/origin/cloudprovider/vsphere.conf" controllerArguments: cloud-provider: - "vsphere" cloud-config: - "/etc/origin/cloudprovider/vsphere.conf" When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, master-config.yaml must be in /etc/origin/master rather than /etc/. 2. When you configure OKD for vSphere using Ansible, the /etc/origin/cloudprovider/vsphere.conf file is created automatically. Because you are manually configuring OKD for vSphere, you must create the file. Before you create the file, decide if you want multiple vCenter zones or not. The cluster installation process configures single-zone or single vCenter by default. However, deploying OKD in vSphere on different zones can be helpful to avoid single-point-of-failures, but creates the need for shared storage across zones. If an OKD node host goes down in zone "A" and the pods should be moved to zone "B". See Multiple zone limitations in the Kubernetes documentation for more information. 
• To configure a single vCenter server, use the following format for the /etc/origin/cloudprovider/vsphere.conf file: [Global] (1) user = "myusername" (2) password = "mypassword" (3) port = "443" (4) insecure-flag = "1" (5) datacenters = "mydatacenter" (6) [VirtualCenter "10.10.0.2"] (7) user = "myvCenterusername" password = "password" [Workspace] (8) server = "10.10.0.2" (9) datacenter = "mydatacenter" folder = "path/to/vms" (10) default-datastore = "shared-datastore" (11) resourcepool-path = "myresourcepoolpath" (12) [Disk] scsicontrollertype = pvscsi (13) [Network] public-network = "VM Network" (14) 1 Any properties set in the [Global] section are used for all specified vcenters unless overriden by the settings in the individual [VirtualCenter] sections. 2 vCenter username for the vSphere cloud provider. 3 vCenter password for the specified user. 4 Optional. Port number for the vCenter server. Defaults to port 443. 5 Set to 1 if the vCenter uses a self-signed certificate. 6 Name of the data center on which Node VMs are deployed. 7 Override specific [Global] properties for this Virtual Center. Possible setting scan be [Port], [user], [insecure-flag], [datacenters]. Any settings not specified are pulled from the [Global] section. 8 Set any properties used for various vSphere Cloud Provider functionality. For example, dynamic provisioning, Storage Profile Based Volume provisioning, and others. 9 IP Address or FQDN for the vCenter server. 10 Path to the VM directory for node VMs. 11 Set to the name of the datastore to use for provisioning volumes using the storage classes or dynamic provisioning. Prior to OKD 3.9, if the datastore was located in a storage directory or is a member of a datastore cluster, the full path was required. 12 Optional. Set to the path to the resource pool where dummy VMs for Storage Profile Based volume provisioning must be created. 13 Type of SCSI controller the VMDK will be attached to the VM as. 14 Set to the network port group for vSphere to access the node, which is called VM Network by default. This is the node host’s ExternalIP that is registered with Kubernetes. • To configure a multiple vCenter servers, use the following format for the /etc/origin/cloudprovider/vsphere.conf file: [Global] (1) user = "myusername" (2) password = "mypassword" (3) port = "443" (4) insecure-flag = "1" (5) datacenters = "us-east, us-west" (6) [VirtualCenter "10.10.0.2"] (7) user = "myvCenterusername" password = "password" [VirtualCenter "10.10.0.3"] port = "448" insecure-flag = "0" [Workspace] (8) server = "10.10.0.2" (9) datacenter = "mydatacenter" folder = "path/to/vms" (10) default-datastore = "shared-datastore" (11) resourcepool-path = "myresourcepoolpath" (12) [Disk] scsicontrollertype = pvscsi (13) [Network] public-network = "VM Network" (14) 1 Any properties set in the [Global] section are used for all specified vcenters unless overriden by the settings in the individual [VirtualCenter] sections. 2 vCenter username for the vSphere cloud provider. 3 vCenter password for the specified user. 4 Optional. Port number for the vCenter server. Defaults to port 443. 5 Set to 1 if the vCenter uses a self-signed certificate. 6 Name of the data centers on which Node VMs are deployed. 7 Override specific [Global] properties for this Virtual Center. Possible setting scan be [Port], [user], [insecure-flag], [datacenters]. Any settings not specified are pulled from the [Global] section. 8 Set any properties used for various vSphere Cloud Provider functionality. 
For example, dynamic provisioning, Storage Profile Based Volume provisioning, and others. 9 IP Address or FQDN for the vCenter server where the Cloud Provider communicates. 10 Path to the VM directory for node VMs. 11 Set to the name of the datastore to use for provisioning volumes using the storage classes or dynamic provisioning. Prior to OKD 3.9, if the datastore was located in a storage directory or is a member of a datastore cluster, the full path was required. 12 Optional. Set to the path to the resource pool where dummy VMs for Storage Profile Based volume provisioning must be created. 13 Type of SCSI controller the VMDK will be attached to the VM as. 14 Set to the network port group for vSphere to access the node, which is called VM Network by default. This is the node host’s ExternalIP that is registered with Kubernetes. 3. Restart the OKD host services: # master-restart api # master-restart controllers # systemctl restart atomic-openshift-node Manually configuring node hosts for vSphere Perform the following on all node hosts. Procedure To configure the OKD nodes for vSphere: 1. Edit the appropriate node configuration map and update the contents of the kubeletArguments section: kubeletArguments: cloud-provider: - "vsphere" cloud-config: - "/etc/origin/cloudprovider/vsphere.conf" The nodeName must match the VM name in vSphere in order for the cloud provider integration to work properly. The name must also be RFC1123 compliant. 2. Restart the OKD services on all nodes. # systemctl restart atomic-openshift-node Applying Configuration Changes Start or restart OKD services on all master and node hosts to apply your configuration changes, see Restarting OKD services: # master-restart api # master-restart controllers # systemctl restart atomic-openshift-node Switching from not using a cloud provider to using a cloud provider produces an error message. Adding the cloud provider tries to delete the node because the node switches from using the hostname as the externalID (which would have been the case when no cloud provider was being used) to using the cloud provider’s instance-id (which is what the cloud provider specifies). To resolve this issue: 1. Log in to the CLI as a cluster administrator. 2. Check and back up existing node labels: $ oc describe node <node_name> | grep -Poz '(?s)Labels.*\n.*(?=Taints)' 3. Delete the nodes: $ oc delete node <node_name> 4. On each node host, restart the OKD service. # systemctl restart origin-node 5. Add back any labels on each node that you previously had. Configuring OKD to use vSphere storage OKD supports VMware vSphere’s Virtual Machine Disk (VMDK) volumes. You can provision your OKD cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed. OKD creates the disk in vSphere and attaches the disk to the correct instance. The OKD persistent volume (PV) framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. vSphere VMDK volumes can be provisioned dynamically. PVs are not bound to a single project or namespace; they can be shared across the OKD cluster. PV claims, however, are specific to a project or namespace and can be requested by users. High availability of storage in the infrastructure is left to the underlying storage provider. 
Prerequisites Before creating PVs using vSphere, ensure your OKD cluster meets the following requirements: • OKD must first be configured for vSphere. • Each node host in the infrastructure must match the vSphere VM name. • Each node host must be in the same resource group. Dynamically Provisioning VMware vSphere volumes Dynamically provisioning VMware vSphere volumes is the preferred provisioning method. 1. If you did not specify the openshift_cloudprovider_kind=vsphere and openshift_vsphere_* variables in the Ansible inventory file when you provisioned the cluster, you must manually create the following StorageClass to use the vsphere-volume provisioner: $ oc get --export storageclass vsphere-standard -o yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: "vsphere-standard" (1) provisioner: kubernetes.io/vsphere-volume (2) parameters: diskformat: thin (3) datastore: "YourvSphereDatastoreName" (4) reclaimPolicy: Delete 1 The name of the StorageClass. 2 The type of storage provisioner. Specify vsphere-volume. 3 The type of disk. Specify either zeroedthick or thin. 4 The source datastore where the disks will be created. 2. After you request a PV, using the StorageClass shown in the previous step, OKD automatically creates VMDK disks in the vSphere infrastructure. To verify that the disks were created, use the Datastore browser in vSphere. vSphere-volume disks are ReadWriteOnce access mode, which means the volume can be mounted as read-write by a single node. See the Access modes section of the Architecture guide for more information. Statically Provisioning VMware vSphere volumes Storage must exist in the underlying infrastructure before it can be mounted as a volume in OKD. After ensuring OKD is configured for vSphere, all that is required for OKD and vSphere is a VM folder path, file system type, and the PersistentVolume API. Creating PersistentVolumes 1. Define a PV object definition, for example vsphere-pv.yaml: apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 (1) spec: capacity: storage: 2Gi (2) accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: (3) volumePath: "[datastore1] volumes/myDisk" (4) fsType: ext4 (5) 1 The name of the volume. This must be how it is identified by PV claims or from pods. 2 The amount of storage allocated to this volume. 3 The volume type being used. This example uses vsphereVolume. The label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastore. 4 The existing VMDK volume to use. You must enclose the datastore name in square brackets ([]) in the volume definition, as shown. 5 The file system type to mount. For example, ext4, xfs, or other file-systems. Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and pod failure. 2. Create the PV: $ oc create -f vsphere-pv.yaml persistentvolume "pv0001" created 3. Verify that the PV was created: $ oc get pv NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 2Gi RWO Available 2s Now you can request storage using PV claims, which can now use your PV. PV claims only exist in the user’s namespace and can only be referenced by a pod within that same namespace. Any attempt to access a PV from a different namespace causes the pod to fail. 
Formatting VMware vSphere volumes Before OKD mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the given file system. Because OKD formats them before the first use, you can use unformatted vSphere volumes as PVs. Configuring the OKD registry for vSphere Configuring the OKD registry for vSphere using Ansible Procedure To configure the Ansible inventory for the registry to use a vSphere volume: [OSEv3:vars] # vSphere Provider Configuration openshift_hosted_registry_storage_kind=vsphere (1) openshift_hosted_registry_storage_access_modes=['ReadWriteOnce'] (2) openshift_hosted_registry_storage_annotations=['volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/vsphere-volume'] (3) openshift_hosted_registry_replicas=1 (4) 1 The storage type. 2 vSphere volumes only support RWO. 3 The annotation for the volume. 4 The number of replicas to configure. The brackets in the configuration file above are required. Dynamically provisioning storage for OKD registry To use vSphere volume storage, edit the registry’s configuration file and mount to the registry pod. Procedure 1. Create a new configuration file from the vSphere volume: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: vsphere-registry-storage annotations: volume.beta.kubernetes.io/storage-class: vsphere-standard spec: accessModes: - ReadWriteOnce resources: requests: storage: 30Gi 2. Create the file in OKD: $ oc create -f pvc-registry.yaml 3. Update the volume configuration to use the new PVC: $ oc set volume dc docker-registry --add --name=registry-storage -t \ pvc --claim-name=vsphere-registry-storage --overwrite 4. Redeploy the registry to read the updated configuration: $ oc rollout latest docker-registry -n default 5. Verify the volume has been assigned: $ oc set volume dc docker-registry -n default Manually provisioning storage for OKD registry Running the following commands manually creates storage, which is used to create storage for the registry if a StorageClass is unavailable or not used. # VMFS cd /vmfs/volumes/datastore1/ mkdir kubevols # Not needed but good hygiene # VSAN cd /vmfs/volumes/vsanDatastore/ /usr/lib/vmware/osfs/bin/osfs-mkdir kubevols # Needed cd kubevols vmkfstools -c 25G registry.vmdk About Red Hat OpenShift Container Storage Red Hat OpenShift Container Storage (RHOCS) is a provider of agnostic persistent storage for OKD either in-house or in hybrid clouds. As a Red Hat storage solution, RHOCS is completely integrated with OKD for deployment, management, and monitoring regardless if it is installed on OKD (converged) or with OKD (independent). OpenShift Container Storage is not limited to a single availability zone or node, which makes it likely to survive an outage. You can find complete instructions for using RHOCS in the RHOCS3.11 Deployment Guide. Backup of persistent volumes OKD provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots. To create a backup of PVs: 1. Stop the application using the PV. 2. Clone the persistent disk. 3. Restart the application. 4. Create a backup of the cloned disk. 5. Delete the cloned disk.
Decent low profile card for dual DVI? Discussion in 'Video Cards' started by mashie, Apr 8, 2012. 1. mashie mashie Mawd Gawd Messages: 3,969 Joined: Oct 25, 2000 I need to drive two DVI 1920x1200 screens using a single card in a low-profile PCIe slot. What is the fastest card that will fit those limitation?   2. Raudulfr Raudulfr 2[H]4U Messages: 2,733 Joined: Sep 12, 2004 3. mashie mashie Mawd Gawd Messages: 3,969 Joined: Oct 25, 2000 Interesting, can you get 1920x1200 out of a HDMI port when you use a HDMI to DVI adaptor?   4. ShuttleMunky ShuttleMunky Limp Gawd Messages: 215 Joined: Apr 5, 2012 There should be no difference in terms of supported resolution when adaptors are used as far as i know.   5. David_CAN David_CAN Limp Gawd Messages: 186 Joined: Aug 16, 2011 Yes. For a while I've run a Dell Studio laptop with the HDMI port connected to a 24" 1920x1200 monitor using an HDMI to DVI cable. You can convert from HDMI to DVI without any 'conversion' happening, it's the same signal. Only area of concern I think is if you need dual link DVI (27"+ monitor or 120Hz HD signal). I think it's HDMI 1.3 or better to support dual link DVI modes but I'm not sure, it could be vendor specific.   6. Raudulfr Raudulfr 2[H]4U Messages: 2,733 Joined: Sep 12, 2004 Well, the Sapphire card apparently has HDMI 1.4. The specifications list "HDMI (with 3D)" and AFAIK only version 1.4 supports 3D.   7. mashie mashie Mawd Gawd Messages: 3,969 Joined: Oct 25, 2000 Thanks, I will go for the 6450 as it will fit and gives me the option to use a third screen if needed.   8. Raudulfr Raudulfr 2[H]4U Messages: 2,733 Joined: Sep 12, 2004
Split one song into many in garage band? Discussion in 'Mac Apps and Mac App Store' started by bloomage, Nov 10, 2008. 1. bloomage macrumors newbie Joined: Jun 4, 2008 #1 Hey all, i was wondering if there was a way to take one long song and split it into several songs and put them in itunes with different names. i recorded a few songs in one take and want to split them. any help would be appreciated. thanks!   2. gauchogolfer macrumors 603 gauchogolfer Joined: Jan 28, 2005 Location: American Riviera #2 This would be very easy, assuming the audio is in a format that Garageband can understand. You might try it in Quicktime Pro if you have it also.   3. bloomage thread starter macrumors newbie Joined: Jun 4, 2008 #3 i have each song in a different track in garage band, so garage band can recognize the audio. and i could move it all as one song into itunes, but i want to split them.   4. PhilW macrumors newbie Joined: Jun 14, 2008 Location: Birmingham, UK #4 I have just used Garage Band for the first time and come across the same problem. I have recorded one side of a cassette tape using the software. I split (Cmd T) the 29 minute 'song' into the 8 or so actual songs on the cassette tape and then selected 'Share', 'Send Song to iTunes'. When iTunes finally opened, all I had was one 29 minute song. So how do I make it into 8 songs in iTunes? Thanks   Share This Page
SNIP I want to keep my featured gallery running the slides like the latest 2 updates, but below it, instead of the style of recent post updates and older post updates that i have, which are those squares, i want to turn each one of those squares into something like on tmz.com how they have there homepage . you know with a full layout of the actual update with text, like here at this screen shot http://i54.tinypic.com/2z5st3t.png Or i guess another way of looking at it, when you click on one of my post and it takes you to the post page, the exact look of the actual post, i want that design on the homepage. Another example would be over at thechive.com how they have there updates Is it possible for me to change this? Is there anyway i can make this happen somehow because im very lost on how to make it happen. Thanks for all your help You've given two examples of some ideas that you want for your site. Why not view the source code for each of those sites, find the elements that are used to code those ideas, and copy/recreate them for your site. Firebug is a great tool for firefox. You can view a page, click a button that lets you select a specific element on that page, then it will show you the exact html and css for those elements. teedoff, I appreciate your response. I am new to CSS and Wordpress and although I am proud of myself for my design being its the first major thing I have done, it took me tons of hours to complete. I have been told about Firebug but I clearly do not understand it at all. I think im expecting to see something that I'm not seeing. As far as looking at the source code for my site and the recent post area, would that be the code in the style.css in wordpress, or something else? Here is what I have in style.css but I don't think this is what I'm looking for /* 2.5 Recent Posts */ #recent-posts { margin: 0 -20px 10px 0; } #recent-posts h3 { margin: 0 0 20px 0; } #recent-posts .post { position: relative; float: left; width: 300px; height: 185px; margin: 0 20px 20px 0; background: #d4d4d4 } #recent-posts .post .heading { width: 270px; padding: 15px; position: absolute; bottom: 0; left: 0; background: rgba(0,0,0,0.8); } #recent-posts .post .heading h2 { font-size: 18px; text-transform: uppercase; } #recent-posts .post .heading h2 a { color: #fff; } #recent-posts .meta { font-size: 11px; text-transform: uppercase; color: #66cc33; } #recent-posts .meta a { font-weight: bold; } /* 2.6 Older Posts */ #older-posts { margin: 0 -20px 30px 0; } #older-posts h3 { margin: 0 0 20px 0; } #older-posts li { float: left; width: 300px; margin: 0 20px -1px 0; padding: 10px 0; border-top: 1px solid #666666; border-bottom: 1px solid #666666; } #older-posts li img { float: left; margin: 0 07px 0 0; } #older-posts li .info { } #older-posts li .info span { display: block; } #older-posts li .info .meta-old { color: #666666; font-size: 11px; } #older-posts li .info .title-old { font-weight: bold; font-size: 22px; line-height: 20px; } #older-posts li .info .title-old a { color: #000; } #older-posts li .info .title-old a:hover { color: #6f6f6f; text-decoration: none; } Can you give me directions? Thank you Hi rwalkerfla, Is it possible for you to provide a link to your site? I'd love to try and help but without looking at your current css it's kind of hard.
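For illustration only, here is the general shape of the change being discussed; it is a sketch, not code from the poster's theme, and the file name, markup, and class names are assumptions. In most themes the homepage squares come from a loop that prints a thumbnail or the_excerpt(); switching that loop to the_content() produces the full-post, TMZ/theChive-style layout, which can then be styled in style.css.

<?php // index.php or home.php - hypothetical simplified homepage loop
if ( have_posts() ) :
    while ( have_posts() ) : the_post(); ?>
        <article class="full-post">
            <h2><a href="<?php the_permalink(); ?>"><?php the_title(); ?></a></h2>
            <div class="meta"><?php the_time( 'F j, Y' ); ?> by <?php the_author(); ?></div>
            <?php the_content(); // full post body instead of the excerpt square ?>
        </article>
    <?php endwhile;
endif; ?>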
Algebra of Limits

what you'll learn...

Understanding the algebra of limits: finding the limit of a function from the limits of its sub-expressions
    →  $f(x) \pm g(x)$
    →  $f(x) \times g(x)$
    →  $f(x) \div g(x)$
    →  $[f(x)]^n$
    →  $f(x)$ under a change of variable $y = g(x)$

Algebra is about the operations + - * / ^. "Algebra of limits" means: properties used to find the limit of a function that is given as algebraic operations on several functions. Let us see this in detail.

The basic mathematical operations are
 •  addition and subtraction
 •  multiplication and division
 •  powers, roots, and logarithms.

Two or more functions $g(x)$ and $h(x)$ can form another function $f(x)$: $f(x) = g(x) \circ h(x)$, where $\circ$ is one of the mathematical operations. Is there a relationship between the limits $\lim g(x)$, $\lim h(x)$ and the limit $\lim f(x)$? The algebra of limits analyses this and provides the required knowledge.

caution before using

In computing the limit of a function, the value of the function or the limit changes:
 •  when a sub-expression evaluates to $0$ in a denominator
 •  when a sub-expression evaluates to $\infty$
 •  at the discontinuous points of piecewise functions.

When applying the algebra of limits to the elements of a function, look out for the following cases.
 •  Expressions evaluating to $\frac{1}{0}$, $\frac{0}{0}$, $\infty \times 0$, or $\frac{\infty}{\infty}$; e.g. $\frac{1}{x-1}$, $\frac{x^2-1}{x-1}$, $\tan x \cot x$, $\frac{\tan x}{\sec x}$
 •  Expressions evaluating to $\infty - \infty$ or $\infty + (-\infty)$; e.g. $\frac{x^2-4x}{x-1} - x$
 •  Discontinuous points of piecewise functions; e.g. $f(x) = \begin{cases}1 & \text{if } x > 0\\ 0 & \text{if } x \le 0\end{cases}$

The algebra of limits applies only when the above values do not occur.

Example: $\lim_{x\to 1}\frac{x^2-1}{x-1}$ cannot be evaluated as $\frac{\lim_{x\to 1}(x^2-1)}{\lim_{x\to 1}(x-1)}$. The rule is not applicable here because the expression evaluates to $\frac{0}{0}$.

The algebra of limits helps to simplify finding a limit by applying the limit to sub-expressions of a function. It may not be applicable to sub-expressions evaluating to $0$ or $\infty$, or at discontinuities.

summary

Algebra of Limits: If a function $f(x)$ consists of mathematical operations on sub-expressions $f_1(x)$, $f_2(x)$, etc., then the limit of the function can be applied to the sub-expressions. If any of the sub-expressions, or a combination of them, evaluates to $0$ or $\infty$, then the algebra of limits may not be applied to those sub-expressions.

results

The limit of a sum (or difference) is the sum (or difference) of the limits.
Limit of Sum or Difference: Given that $\lim_{x\to a}f(x)$ and $\lim_{x\to a}g(x)$ exist, then
$\lim_{x\to a}(f(x) \pm g(x)) = \lim_{x\to a}f(x) \pm \lim_{x\to a}g(x)$

The limit of a product is the product of the limits.
Limit of Product: Given that $\lim_{x\to a}f(x)$ and $\lim_{x\to a}g(x)$ exist, then
$\lim_{x\to a}(f(x)\,g(x)) = \lim_{x\to a}f(x) \cdot \lim_{x\to a}g(x)$

The limit of a quotient is the quotient of the limits.
Limit of Quotient: Given that $\lim_{x\to a}f(x)$ and $\lim_{x\to a}g(x)$ exist, then
$\lim_{x\to a}\frac{f(x)}{g(x)} = \frac{\lim_{x\to a}f(x)}{\lim_{x\to a}g(x)}$

The limit of a power is the power of the limit.
Limit of Exponent: Given that $\lim_{x\to a}f(x)$ exists, then
$\lim_{x\to a}[f(x)]^n = \bigl[\lim_{x\to a}f(x)\bigr]^n$

The limit of a root is the root of the limit.
Limit of Root: Given that $\lim_{x\to a}f(x)$ exists, then
$\lim_{x\to a}[f(x)]^{1/n} = \bigl[\lim_{x\to a}f(x)\bigr]^{1/n}$

The variable in a limit can be changed. Given $\lim_{x\to 0}\frac{\sin x}{x} = 1$:
$\lim_{x\to 0}\frac{\sin(x^2)}{x} = \lim_{x\to 0} x\,\frac{\sin(x^2)}{x^2} = \lim_{x\to 0}x \times \lim_{y\to 0}\frac{\sin y}{y}$, where $y = x^2$; by that substitution, $\lim_{x\to 0}$ changes to $\lim_{y\to 0}$.
$= 0 \times 1 = 0$

Note: If, in another case, $y = \cos x$, then $\lim_{x\to 0}$ changes to $\lim_{y\to 1}$, as $y = \cos 0 = 1$.

Change of Variable in a Limit: Given that $y = g(x)$ exists at $x = a$, then
$\lim_{x\to a}f(x) = \lim_{y\to g(a)}f(g^{-1}(y))$

summary

Algebra of Limits
    →  If the sub-expressions do not evaluate to $0$ or $\infty$, the limit can be applied to the sub-expressions.
    →  If the sub-expressions evaluate to $0$ or $\infty$, look for the forms of $\frac{0}{0}$.

Limit of Sum or Difference  »  The limit distributes over addition and subtraction when the value is not $\infty - \infty$
    →  $\lim_{x\to a}[f(x) \pm g(x)] = \lim_{x\to a}f(x) \pm \lim_{x\to a}g(x)$

Limit of Product  »  The limit distributes over multiplication when the value is not $\infty \times 0$
    →  $\lim_{x\to a}[f(x) \times g(x)] = \lim_{x\to a}f(x) \times \lim_{x\to a}g(x)$

Limit of Quotient  »  The limit distributes over division when the value is not $0 \div 0$ or $\infty \div \infty$
    →  $\lim_{x\to a}[f(x) \div g(x)] = \lim_{x\to a}f(x) \div \lim_{x\to a}g(x)$

Limit of Exponent  »  The limit distributes over an exponent when the value is not $\infty^0$ or $0^0$
    →  $\lim_{x\to a}[f(x)]^n = \bigl[\lim_{x\to a}f(x)\bigr]^n$

Limit of Root  »  The limit distributes over a root when the value is not $\infty^0$ or $0^0$
    →  $\lim_{x\to a}[f(x)]^{1/n} = \bigl[\lim_{x\to a}f(x)\bigr]^{1/n}$

Change of Variable in a Limit  »  The variable can be substituted when the value is not any of the forms of $\frac{0}{0}$
    →  $\lim_{x\to a}f(x) = \lim_{y\to g(a)}f(g^{-1}(y))$

Outline

The outline of material to learn "limits (calculus)" is as follows. Note: see the detailed outline of Limits (Calculus).
    →   Indeterminate and Undefined
    →   Indeterminate value in Functions
    →   Expected Value
    →   Continuity
    →   Definition by Limits
    →   Geometrical Explanation for Limits
    →   Limit with Numerator and Denominator
    →   Limits of Ratios - Examples
    →   L'hospital Rule
    →   Examining a function
    →   Algebra of Limits
    →   Limit of a Polynomial
    →   Limit of Ratio of Zeros
    →   Limit of ratio of infinities
    →   limit of Binomial
    →   Limit of Non-algebraic Functions
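To see the rules in action, here is a short worked example, added for illustration and not part of the original lesson. No sub-expression evaluates to $0$ in a denominator or to $\infty$, so the algebra of limits applies directly:

$\lim_{x\to 2}\bigl[(x^2+1)(x-1)\bigr] = \lim_{x\to 2}(x^2+1) \times \lim_{x\to 2}(x-1)$   (limit of product)
$\quad = \bigl(\lim_{x\to 2}x^2 + \lim_{x\to 2}1\bigr) \times \bigl(\lim_{x\to 2}x - \lim_{x\to 2}1\bigr)$   (limit of sum and difference)
$\quad = (4+1)\times(2-1) = 5$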
Create an FCI with a premium file share (SQL Server on Azure VMs)

APPLIES TO: SQL Server on Azure VM

This article explains how to create a failover cluster instance (FCI) with SQL Server on Azure Virtual Machines (VMs) by using a premium file share. Premium file shares are solid-state drive (SSD)-backed, consistently low-latency file shares that are fully supported for use with failover cluster instances for SQL Server 2012 or later on Windows Server 2012 or later. Premium file shares give you greater flexibility, allowing you to resize and scale a file share without any downtime. To learn more, see an overview of FCI with SQL Server on Azure VMs and cluster best practices.

Note: It's now possible to lift and shift your failover cluster instance solution to SQL Server on Azure VMs using Azure Migrate. See Migrate failover cluster instance to learn more.

Prerequisites

Before you complete the instructions in this article, you should already have:

Mount premium file share

1. Sign in to the Azure portal and go to your storage account.
2. Go to File Shares under File service, and then select the premium file share you want to use for your SQL storage.
3. Select Connect to bring up the connection string for your file share.
4. In the drop-down list, select the drive letter you want to use, and then copy both code blocks to Notepad. (Screenshot: Copy both PowerShell commands from the file share Connect pane.)
5. Use Remote Desktop Protocol (RDP) to connect to the SQL Server VM with the account that your SQL Server FCI will use for the service account.
6. Open an administrative PowerShell command console.
7. Run the commands that you saved earlier when you were working in the portal.
8. Go to the share by using either File Explorer or the Run dialog box (select Windows + R). Use the network path \\storageaccountname.file.core.windows.net\filesharename. For example, \\sqlvmstorageaccount.file.core.windows.net\sqlpremiumfileshare
9. Create at least one folder on the newly connected file share to place your SQL data files into.
10. Repeat these steps on each SQL Server VM that will participate in the cluster.

Important:
• Consider using a separate file share for backup files to save the input/output operations per second (IOPS) and space capacity of this share for data and log files. You can use either a Premium or Standard file share for backup files.
• If you're on Windows Server 2012 R2 or earlier, follow these same steps to mount the file share that you're going to use as the file share witness.

Add Windows cluster feature

1. Connect to the first virtual machine with RDP by using a domain account that's a member of the local administrators and that has permission to create objects in Active Directory. Use this account for the rest of the configuration.
2. Add failover clustering to each virtual machine. To install failover clustering from the UI, do the following on both virtual machines:
   a. In Server Manager, select Manage, and then select Add Roles and Features.
   b. In the Add Roles and Features wizard, select Next until you get to Select Features.
   c. In Select Features, select Failover Clustering. Include all required features and the management tools.
   d. Select Add Features.
   e. Select Next, and then select Finish to install the features.
To install failover clustering by using PowerShell, run the following script from an administrator PowerShell session on one of the virtual machines: $nodes = ("<node1>","<node2>") Invoke-Command $nodes {Install-WindowsFeature Failover-Clustering -IncludeAllSubFeature -IncludeManagementTools} Validate cluster Validate the cluster in the UI or by using PowerShell. To validate the cluster by using the UI, do the following on one of the virtual machines: 1. Under Server Manager, select Tools, and then select Failover Cluster Manager. 2. Under Failover Cluster Manager, select Action, and then select Validate Configuration. 3. Select Next. 4. Under Select Servers or a Cluster, enter the names of both virtual machines. 5. Under Testing options, select Run only tests I select. 6. Select Next. 7. Under Test Selection, select all tests except for Storage and Storage Spaces Direct, as shown here: Select cluster validation tests 8. Select Next. 9. Under Confirmation, select Next. The Validate a Configuration wizard runs the validation tests. To validate the cluster by using PowerShell, run the following script from an administrator PowerShell session on one of the virtual machines: Test-Cluster –Node ("<node1>","<node2>") –Include "Inventory", "Network", "System Configuration" After you validate the cluster, create the failover cluster. Create failover cluster To create the failover cluster, you need: • The names of the virtual machines that will become the cluster nodes. • A name for the failover cluster. • An IP address for the failover cluster. You can use an IP address that's not used on the same Azure virtual network and subnet as the cluster nodes. The following PowerShell script creates a failover cluster for Windows Server 2012 through Windows Server 2016. Update the script with the names of the nodes (the virtual machine names) and an available IP address from the Azure virtual network. New-Cluster -Name <FailoverCluster-Name> -Node ("<node1>","<node2>") –StaticAddress <n.n.n.n> -NoStorage Configure quorum Configure the quorum solution that best suits your business needs. You can configure a Disk Witness, a Cloud Witness, or a File Share Witness. For more information, see Quorum with SQL Server VMs. Test cluster failover Test the failover of your cluster. In Failover Cluster Manager, right-click your cluster, select More Actions > Move Core Cluster Resource > Select node, and then select the other node of the cluster. Move the core cluster resource to every node of the cluster, and then move it back to the primary node. If you can successfully move the cluster to each node, you're ready to install SQL Server. Test cluster failover by moving the core resource to the other nodes Create SQL Server FCI After you've configured the failover cluster, you can create the SQL Server FCI. 1. Connect to the first virtual machine by using RDP. 2. In Failover Cluster Manager, make sure that all the core cluster resources are on the first virtual machine. If necessary, move all resources to this virtual machine. 3. Locate the installation media. If the virtual machine uses one of the Azure Marketplace images, the media is located at C:\SQLServer_<version number>_Full. 4. Select Setup. 5. In the SQL Server Installation Center, select Installation. 6. Select New SQL Server failover cluster installation, and then follow the instructions in the wizard to install the SQL Server FCI. The FCI data directories need to be on the premium file share. 
Enter the full path of the share, in this format: \\storageaccountname.file.core.windows.net\filesharename\foldername. A warning will appear, telling you that you've specified a file server as the data directory. This warning is expected. Ensure that the user account you used to access the VM over RDP when you persisted the file share is the same account that the SQL Server service uses, to avoid possible failures. (Screenshot: Use the file share as the SQL data directories.)
7. After you complete the steps in the wizard, Setup installs a SQL Server FCI on the first node.
8. After Setup installs the FCI on the first node, connect to the second node by using RDP.
9. Open the SQL Server Installation Center, and then select Installation.
10. Select Add node to a SQL Server failover cluster. Follow the instructions in the wizard to install SQL Server and add the server to the FCI.
Note: If you used an Azure Marketplace gallery image with SQL Server, SQL Server tools were included with the image. If you didn't use one of those images, install the SQL Server tools separately. For more information, see Download SQL Server Management Studio (SSMS).
11. Repeat these steps on any other nodes that you want to add to the SQL Server failover cluster instance.

Register with the SQL VM RP

To manage your SQL Server VM from the portal, register it with the SQL IaaS Agent extension (RP) in lightweight management mode, currently the only mode that's supported with FCI and SQL Server on Azure VMs.

Register a SQL Server VM in lightweight mode with PowerShell (-LicenseType can be PAYG or AHUB):

# Get the existing compute VM
$vm = Get-AzVM -Name <vm_name> -ResourceGroupName <resource_group_name>

# Register SQL VM with 'Lightweight' SQL IaaS agent
New-AzSqlVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Location $vm.Location `
    -LicenseType <license_type> -SqlManagementType LightWeight

Configure connectivity

To route traffic appropriately to the current primary node, configure the connectivity option that's suitable for your environment. You can create an Azure load balancer or, if you're using SQL Server 2019 CU2 (or later) and Windows Server 2016 (or later), you can use the distributed network name feature instead. For more details about cluster connectivity options, see Route HADR connections to SQL Server on Azure VMs. A load-balancer probe-port sketch follows at the end of this article.

Limitations

Next steps

If you haven't already done so, configure connectivity to your FCI with a virtual network name and an Azure load balancer or distributed network name (DNN). If premium file shares are not the appropriate FCI storage solution for you, consider creating your FCI by using Azure shared disks or Storage Spaces Direct instead. To learn more, see an overview of FCI with SQL Server on Azure VMs and cluster configuration best practices, and the connectivity article referenced above.
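The following is a minimal sketch of the load-balancer probe-port configuration mentioned under Configure connectivity. It is not one of this article's numbered steps; the resource names, probe port, and IP address are placeholders that must match your own cluster and internal load balancer:

# Run on one of the cluster nodes; all values below are placeholders
$ClusterNetworkName = "Cluster Network 1"          # from Get-ClusterNetwork
$IPResourceName     = "SQL IP Address 1 (sqlfci)"  # the FCI's IP address resource
$ILBIP              = "10.0.0.10"                  # front-end IP of the internal load balancer

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{
    Address    = $ILBIP
    ProbePort  = 59999
    SubnetMask = "255.255.255.255"
    Network    = $ClusterNetworkName
    EnableDhcp = 0
}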
Revoke-NfsClientLock

Applies to: Windows Server 2012 R2 and Windows 8.1

Releases file locks that a client computer holds on an NFS server.

Syntax

Parameter Set: v4
Revoke-NfsClientLock [-Path] <String[]> [[-LockType] <ClientLockType[]>] [[-StateId] <String>] [-AsJob] [-CimSession <CimSession[]>] [-PassThru] [-ThrottleLimit <Int32>] [-Confirm] [-WhatIf] [<CommonParameters>]

Parameter Set: InputObject (cdxml)
Revoke-NfsClientLock -InputObject <CimInstance[]> [-AsJob] [-CimSession <CimSession[]>] [-PassThru] [-ThrottleLimit <Int32>] [-Confirm] [-WhatIf] [<CommonParameters>]

Parameter Set: v3
Revoke-NfsClientLock [-Path] <String[]> [[-LockType] <ClientLockType[]>] [[-ComputerName] <String>] [-AsJob] [-CimSession <CimSession[]>] [-PassThru] [-ThrottleLimit <Int32>] [-Confirm] [-WhatIf] [<CommonParameters>]

Detailed Description

The Revoke-NfsClientLock cmdlet releases locks that a client computer currently holds for files that a Network File System (NFS) server shares. You can select the locks to revoke by specifying the path of the files. The Path parameter supports wildcards, so you can revoke multiple locks that match the specified pattern. You can revoke NFS v4 protocol locks by specifying the state identifier of the locked files. You can revoke NFS v3 protocol locks by specifying the computer name of the NFS client that holds the lock on the files.

Parameters

-AsJob
Aliases: none. Required: false. Position: named. Default value: none. Accept pipeline input: false. Accept wildcard characters: false.

-CimSession <CimSession[]>
Runs the cmdlet in a remote session or on a remote computer. Enter a computer name or a session object, such as the output of a New-CimSession or Get-CimSession cmdlet. The default is the current session on the local computer.
Aliases: Session. Required: false. Position: named. Default value: none. Accept pipeline input: false. Accept wildcard characters: false.

-ComputerName <String>
Specifies the host name of the client computer that holds a lock on files on the NFS server.
Aliases: client, name, ClientComputer. Required: false. Position: 3. Default value: none. Accept pipeline input: True (ByPropertyName). Accept wildcard characters: false.

-InputObject <CimInstance[]>
Specifies the input to this cmdlet. You can use this parameter, or you can pipe the input to this cmdlet.
Aliases: none. Required: true. Position: named. Default value: none. Accept pipeline input: True (ByValue). Accept wildcard characters: false.

-LockType <ClientLockType[]>
Specifies an array of lock types on the file. Valid values are NLM and NFS. Locks acquired by NFS clients that mount files to an NFS share by using the NFS v2 or NFS v3 protocol are Network Lock Manager (NLM) locks. Locks acquired by NFS clients that mount files by using the NFS v4.1 protocol are NFS locks.
Aliases: type. Required: false. Position: 2. Default value: none. Accept pipeline input: True (ByPropertyName). Accept wildcard characters: false.

-PassThru
Returns an object representing the item with which you are working. By default, this cmdlet does not generate any output.
Aliases: none. Required: false. Position: named. Default value: none. Accept pipeline input: false. Accept wildcard characters: false.

-Path <String[]>
Specifies an array of paths and file names on the NFS server. If there are multiple clients that have multiple locks on the same file, all the locks are revoked.
Aliases: file, LockedFile. Required: true. Position: 1. Default value: none. Accept pipeline input: True (ByPropertyName). Accept wildcard characters: false.

-StateId <String>
Specifies the state identifier of the locks to revoke. StateId applies only to locks that an NFS client acquires by using the NFS v4.1 protocol.
Aliases: none. Required: false. Position: 3. Default value: none. Accept pipeline input: True (ByPropertyName). Accept wildcard characters: false.

-ThrottleLimit <Int32>
Specifies the maximum number of concurrent operations that can be established to run the cmdlet. If this parameter is omitted or a value of 0 is entered, Windows PowerShell calculates an optimum throttle limit for the cmdlet based on the number of CIM cmdlets that are running on the computer. The throttle limit applies only to the current cmdlet, not to the session or to the computer.
Aliases: none. Required: false. Position: named. Default value: none. Accept pipeline input: false. Accept wildcard characters: false.

-Confirm
Prompts you for confirmation before running the cmdlet.
Required: false. Position: named. Default value: false. Accept pipeline input: false. Accept wildcard characters: false.

-WhatIf
Shows what would happen if the cmdlet runs. The cmdlet is not run.
Required: false. Position: named. Default value: false. Accept pipeline input: false. Accept wildcard characters: false.

<CommonParameters>
This cmdlet supports the common parameters: -Verbose, -Debug, -ErrorAction, -ErrorVariable, -OutBuffer, and -OutVariable. For more information, see about_CommonParameters (http://go.microsoft.com/fwlink/p/?LinkID=113216).

Inputs
The input type is the type of the objects that you can pipe to the cmdlet.

Outputs
The output type is the type of the objects that the cmdlet emits.
• Nothing

Examples

Example 1: Revoke the lock on all files
This command revokes the lock on all the files on a local NFS server that have a path that begins with c:\shares\.
PS C:\> Revoke-NfsClientLock -Path c:\shares\*
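A second usage example, not from the original reference, may help illustrate the v3 parameter set; the share path and client host name are hypothetical:

Example 2: Revoke NLM locks held by a specific client
This command releases the NLM locks that the client computer Contoso-Client01 holds on files under C:\shares\projects and returns the affected lock objects.
PS C:\> Revoke-NfsClientLock -Path "C:\shares\projects\*" -ComputerName "Contoso-Client01" -LockType NLM -PassThru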
What is 505 divided by 950 using long division? Confused by long division? By the end of this article you'll be able to divide 505 by 950 using long division and be able to apply the same technique to any other long division problem you have! Let's take a look. Want to quickly learn or show students how to solve 505 divided by 950 using long division? Play this very quick and fun video now! Okay so the first thing we need to do is clarify the terms so that you know what each part of the division is: • The first number, 505, is called the dividend. • The second number, 950 is called the divisor. What we'll do here is break down each step of the long division process for 505 divided by 950 and explain each of them so you understand exactly what is going on. 505 divided by 950 step-by-step guide Step 1 The first step is to set up our division problem with the divisor on the left side and the dividend on the right side, like we have it below: 950505 Step 2 We can work out that the divisor (950) goes into the first digit of the dividend (5), 0 time(s). Now we know that, we can put 0 at the top: 0 950505 Step 3 If we multiply the divisor by the result in the previous step (950 x 0 = 0), we can now add that answer below the dividend: 0 950505 0 Step 4 Next, we will subtract the result from the previous step from the second digit of the dividend (5 - 0 = 5) and write that answer below: 0 950505 -0 5 Step 5 Move the second digit of the dividend (0) down like so: 0 950505 -0 50 Step 6 The divisor (950) goes into the bottom number (50), 0 time(s), so we can put 0 on top: 00 950505 -0 50 Step 7 If we multiply the divisor by the result in the previous step (950 x 0 = 0), we can now add that answer below the dividend: 00 950505 -0 50 0 Step 8 Next, we will subtract the result from the previous step from the third digit of the dividend (50 - 0 = 50) and write that answer below: 00 950505 -0 50 -0 50 Step 9 Move the third digit of the dividend (5) down like so: 00 950505 -0 50 -0 505 Step 10 The divisor (950) goes into the bottom number (505), 0 time(s), so we can put 0 on top: 000 950505 -0 50 -0 505 Step 11 If we multiply the divisor by the result in the previous step (950 x 0 = 0), we can now add that answer below the dividend: 000 950505 -0 50 -0 505 0 Step 12 Next, we will subtract the result from the previous step from the fourth digit of the dividend (505 - 0 = 505) and write that answer below: 000 950505 -0 50 -0 505 -0 505 So, what is the answer to 505 divided by 950? If you made it this far into the tutorial, well done! There are no more digits to move down from the dividend, which means we have completed the long division problem. Your answer is the top number, and any remainder will be the bottom number. So, for 505 divided by 950, the final solution is: 0 Remainder 505 Cite, Link, or Reference This Page If you found this content useful in your research, please do us a great favor and use the tool below to make sure you properly reference us wherever you use it. We really appreciate your support! • "What is 505 Divided by 950 Using Long Division?". VisualFractions.com. Accessed on April 14, 2021. https://visualfractions.com/calculator/long-division/what-is-505-divided-by-950-using-long-division/. • "What is 505 Divided by 950 Using Long Division?". VisualFractions.com, https://visualfractions.com/calculator/long-division/what-is-505-divided-by-950-using-long-division/. Accessed 14 April, 2021. • What is 505 Divided by 950 Using Long Division?. VisualFractions.com. 
Retrieved from https://visualfractions.com/calculator/long-division/what-is-505-divided-by-950-using-long-division/.

Extra calculations for you

Now that you've learned the long division approach to 505 divided by 950, here are a few other ways you might do the calculation:

• Using a calculator, if you typed in 505 divided by 950, you'd get 0.5316.
• You could also express 505/950 as a mixed fraction: 0 505/950
• If you look at the mixed fraction 0 505/950, you'll see that the numerator is the same as the remainder (505), the denominator is our original divisor (950), and the whole number is our final answer (0).

Long Division Calculator
(The original page has an interactive calculator here for entering another long division problem.)

Next Long Division Problem

Eager for more long division but can't be bothered to type two numbers into the calculator above? No worries. Here's the next problem for you to solve: What is 505 divided by 951 using long division?

Random Long Division Problems
(The original page follows with a list of randomly generated practice problems.)
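As a quick cross-check of the result above (0 remainder 505), and not part of the original page, here is a short Python snippet that reproduces the quotient, remainder, and decimal answer:

# Verify the long division result for 505 / 950
quotient, remainder = divmod(505, 950)
print(quotient, remainder)   # prints: 0 505
print(round(505 / 950, 4))   # prints: 0.5316, matching the calculator answer quoted above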